Smart Faculty Evaluation: A Mobile App Using NLP-Based Sentiment Analysis and Random Forest for Faculty Assessment at Universidad De Manila
Andrei D. Espina*, Joan S. Jose, Joffrey Luna, James Maico M. Velasco, Ronald Fernandez
College of Computing Studies, Universidad De Manila, Philippines
*Corresponding Author
DOI: https://dx.doi.org/10.47772/IJRISS.2025.909000434
Received: 24 September 2025; Accepted: 30 September 2025; Published: 14 October 2025
ABSTRACT
Faculty evaluation is an essential tool for assuring teaching competence and advancing institutional quality. Nevertheless, many academic institutions still rely on traditional approaches that are repetitive, subjective, and narrow in scope. The current study focuses on developing the Smart Faculty Evaluation: A Mobile Application with NLP-based Sentiment Analysis and Random Forest, designed specifically for use at the Universidad de Manila. The system incorporates Natural Language Processing (NLP) to automatically and systematically analyze open-ended student feedback, converting qualitative comments into structured, evidence-based insights. Meanwhile, the Random Forest method is used to enhance the precision and consistency of classification and to measure faculty performance. The project was developed following Agile development principles, under which it was designed, tested, and refined iteratively to remain responsive to the needs of users and the requirements of the institution.
The system was tested against the ISO/IEC 25010 software quality standard. The findings indicated overall weighted mean scores between 4.46 and 4.53, interpreted as Above Average to Excellent in terms of functional suitability, performance efficiency, reliability, security, portability, and usability. These results indicate that the proposed app is effective, reliable, and convenient to operate. On the whole, the Smart Faculty Evaluation App offers an automated and structured approach to faculty evaluation that increases transparency, accessibility, and efficacy.
Keywords: Artificial Intelligence, Faculty Evaluation, Natural Language Processing, Sentiment Analysis, Random Forest Algorithm
INTRODUCTION
Artificial Intelligence (AI) is the imitation of human intelligence in computers so that systems can evaluate data, identify patterns, and make choices. Machine Learning (ML), a branch of AI, enables systems to learn and improve from data without pre-coded rules. In education, AI and ML facilitate automation, customized insights, and fact-based decision-making. Technologies such as Random Forest and Natural Language Processing (NLP) are particularly helpful for detecting trends, categorizing feedback, analyzing sentiments, and making precise predictions that enhance the accuracy and effectiveness of faculty evaluations.
At Universidad de Manila, evaluations are currently conducted through Google Forms (GForms). Although this provides a simple framework, it does not process data in real time, handles qualitative feedback poorly, and yields limited insight. To overcome these challenges, the Smart Faculty Evaluation app applies sentiment analysis using NLP and predictive modeling through the Random Forest algorithm. Students log in, select a faculty member, and fill out a form with numerical ratings and text comments.
The system immediately processes the input, interpreting sentiments and forecasting performance trends. Results are aggregated into elaborate reports accessible via a dashboard, enabling faculty members to comprehend strengths and opportunities for growth. The system will also provide real-time insights and automatically derived recommendations for faculty growth. This smart, data-based solution upgrades the current assessment practices into a faster, simpler, and fairer process aligning with Universidad de Manila’s mission of improving education quality through contemporary technology.
LITERATURE REVIEW
1) Web-based faculty evaluation system of Apayao State College, Philippines. JPAIR Institutional Research: Taguiam [8] developed a web-based faculty evaluation system at Apayao State College as a replacement for the previously used manual system, making the process more convenient and efficient. While the system successfully digitized the evaluation process, it was limited to numerical student ratings and did not analyze qualitative feedback. In contrast, the Smart Faculty Evaluation app integrates AI-driven sentiment analysis using NLP, enabling objective interpretation of student comments and transforming simple assessments into intelligent, evidence-based responses.
2) Development and Validation of a Faculty Performance Evaluation Instrument: Medallon and Dimaculangan [4] developed and validated a faculty performance evaluation tool designed for flexible learning environments, with results showing strong reliability. However, the instrument was limited to traditional survey questions administered on an as-needed basis, without intelligent performance analysis or structured feedback. Building on this foundation, the present study incorporates NLP and Random Forest into a mobile application, serving as an intermediary between conventional evaluation tools and AI-supported analysis.
3) Faculty performance evaluation in a Philippine university Information technology program. IOSR Journal of Humanities and Social Science: Caluza et al. [9] presented a systematic approach to evaluating faculty performance in the Information Technology program of a Philippine higher education institution. While the study introduced a more structured evaluation process, it remained dependent on traditional survey-based assessments without predictive or intelligent features. Building on this prior work, the present study applies Random Forest modeling to generate evidence-based insights; its key contribution is combining quantitative ratings with machine learning to predict faculty performance and provide practical recommendations.
4) Students’ Evaluation of Faculty-Prepared Instructional Modules: Inferences for instructional materials review and revision: Hamora et al. [10] conducted online assessments to evaluate faculty-prepared instructional modules, focusing on identifying strengths and weaknesses in content areas that required improvement. While their study emphasized the relevance of student perspectives, the process was largely manual and unable to handle large volumes of qualitative data. The present study addresses this limitation by integrating NLP-based sentiment analysis to automate feedback classification. This contribution enables the development of a scalable and intelligent system capable of processing both numerical and textual information in real time.
5) HighTeach: A Web-based Teacher Evaluation System for a Higher Learning Institution in the Philippines: Amboya et al. [11] designed, developed, and tested the HighTeach web-based teacher evaluation system using the SDLC framework. While the system successfully provided a platform for digital assessment, it was not deployed, maintained, or equipped for in-depth analysis. Building on this foundation, the present study introduces AI-based sentiment analysis and Random Forest modeling to establish an intelligent, real-time system for faculty development and performance improvement.
6) Faculty Use of End-of-Course Evaluations: Pacheco Diaz et al. [2] highlighted that end-of-course evaluations are widely used as tools to improve courses and teaching strategies, but most faculty perceived them as neither fair nor effective. The approach remained primarily survey-based and lacked mechanisms to analyze qualitative feedback or generate predictive insights. In contrast, the present study applies sentiment analysis and Random Forest modeling to provide real-time, evidence-based insights that make faculty evaluations more objective and actionable.
7) Evaluating Teachers’ Performance through Aspect-Based Sentiment Analysis: Bhowmik et al. [6] applied aspect-based sentiment analysis (ABSA) to assess teacher performance based on student opinions. Their study analyzed more than 2 million comments using a BiLSTM-based model to identify constructive, negative, and neutral elements of teaching. Results showed that the method is reliable in providing specific and impartial insights, making it effective for performance appraisal and supporting the improvement of teaching practices.
8) Teaching Evaluation Index of College Students Based on Random Forest: Jiang et al. [5] applied the Random Forest algorithm to model and analyze the indicators of student evaluation. They designed a teaching evaluation questionnaire for Programming Fundamentals and tested it with Random Forest by comparing measured and predicted scores. Results indicated that teaching effect and language expression had the greatest significance, serving as valuable references for enhancing evaluation scales in higher education.
9) The Impact of Grade Inflation on Teachers’ Evaluation: Maamari and Naccache [1] conducted a quantitative study examining the influence of GPA, major, and university type (public or private) on students’ perceptions of teacher evaluation. Findings revealed a strong relationship between grade inflation and student views of faculty performance, showing that students often use the evaluation process to influence grading practices. The study emphasized concerns that evaluations may not fully capture teaching quality, as they can be shaped by student expectations for lenient assessment methods.
10) Performance Evaluation Practices of Select Higher Education Institution in the City of Manila: Gutierrez [3] evaluated faculty performance assessment practices in selected higher education institutions in Manila through student surveys, peer evaluations, and superior assessments. Results showed that both academic leaders and faculty generally viewed the existing evaluation methods as effective, with no significant differences between the two groups. However, variations emerged when faculty rank or position was considered, highlighting the need for a more comprehensive evaluation framework that incorporates diverse perspectives.
11) Designing a Teacher Recommender System: A Thematic Literature Review of Teacher Evaluation Systems: Ortiz and Dumlao [7] reviewed existing teacher evaluation systems and noted that they often rely heavily on numerical ratings while overlooking qualitative feedback. Their study found that VADER with a modified Filipino lexicon was effective for sentiment analysis, whereas Latent Dirichlet Allocation (LDA) performed well for topic modeling. Based on these findings, they proposed TeachAIRs, a teacher recommender system that generates real-time insights and feedback to enhance teaching practices and improve student learning outcomes.
Conceptual Framework
The conceptual framework shows the interconnected components of the Smart Faculty Evaluation System and offers a systematic structure for automating faculty evaluation. It describes the input, process, and output: students enter the evaluation data, the system performs sentiment analysis and performance assessment on that data, and the results are presented to faculty and administration. This framework helps the study illustrate how users interact with the system, how the system handles and analyzes evaluation results, and how its outputs, including categorized feedback and performance insights, are delivered to users accurately and meaningfully to support faculty improvement.
Fig. 1 Input-Process-Output Model
Fig. 1 shows the conceptual map of the Faculty Evaluation System, illustrating how evaluation data is received from students and delivered to faculty in a systematic and automated way. The model demonstrates how the inputs, including evaluation restrictions defined by the admin and the ratings and comments submitted by students, pass through a review-and-approval process before the outputs, including faculty performance reports and AI-generated recommendations, are produced. This structured approach provides a consistent, unbiased, and informative evaluation experience for all stakeholders.
Input Phase.
This phase starts with the admin applying restrictions, such as which class, faculty, and subject will be assessed, and when and where students will complete evaluations. Students then give their feedback by submitting ratings and comments according to the evaluation criteria presented. This phase ensures that appropriate and timely information is gathered for processing.
Process Phase.
After the student evaluations have been submitted and stored in the database, the admin reviews the data, checks its validity, and then approves or disapproves it. Once approved, the evaluations are finalized and presented in the faculty results section.
Output Phase.
The system activates its Sentiment Analysis and Random Forest AI functions when faculty view their results. Sentiment Analysis separates the comments into positive, negative, and neutral categories, while Random Forest considers the evaluation scores and recommends what to improve based on low or average ratings. The system then issues a detailed report to the faculty, including categorized comments, overall ratings, and AI-generated recommendations, to support professional development and enhance teaching quality.
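To make the output phase concrete, the following is a minimal Python sketch of how categorized comments and per-criterion ratings might be combined into a single faculty report. The function names, the 3.5 cutoff, and the sample data are illustrative assumptions, not the system's actual code.

```python
# Minimal sketch of the output phase: combine categorized comments and
# flagged rating criteria into one faculty report (illustrative names only).
from collections import Counter

LOW_RATING_THRESHOLD = 3.5  # assumed cutoff for "needs improvement"

def build_faculty_report(comment_sentiments, criterion_averages):
    """comment_sentiments: list of 'positive'/'negative'/'neutral' labels.
    criterion_averages: dict mapping evaluation criterion -> mean rating (1-5)."""
    sentiment_counts = Counter(comment_sentiments)
    weak_areas = [c for c, avg in criterion_averages.items()
                  if avg < LOW_RATING_THRESHOLD]
    recommendations = [f"Review and strengthen practices related to '{c}'."
                       for c in weak_areas]
    return {
        "sentiment_summary": dict(sentiment_counts),
        "overall_rating": round(sum(criterion_averages.values())
                                / len(criterion_averages), 2),
        "weak_areas": weak_areas,
        "recommendations": recommendations,
    }

# Example usage with made-up data
report = build_faculty_report(
    ["positive", "positive", "neutral", "negative"],
    {"Teaching Methodology": 4.6, "Classroom Engagement": 3.2},
)
print(report)
```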
METHODOLOGY
The methodology defines how data is collected and how the system is designed, implemented, and tested, ensuring that the study is carried out in a logical and structured manner. This chapter presents the methodology used in developing the Smart Faculty Evaluation, a mobile application for feedback and performance assessment in academic institutions. It provides a systematic approach for ensuring the reliability, efficiency, and effectiveness of the system.
Research Design
Fig. 2 Agile Methodology
The research followed a developmental research design, which involved the systematic development and testing of the Smart Faculty Evaluation System. This design was chosen to ensure that the final system is efficient, accurate, and easy to use, thereby meeting the demands of faculty, students, and administrators. The system was created to facilitate and automate the faculty evaluation process, incorporating features such as AI-based sentiment analysis and feedback aggregation. The process was systematic and passed through several key phases: design, coding, testing, and refinement. Finally, surveys, system performance analysis, and expert validation were used to gather the data required to validate the accuracy and usability of the deployed system.
1) Plan: The proponents used interviews as the initial stage of the Agile methodology to determine the feasibility of, and the need for, an enhanced faculty evaluation process. Key stakeholders were consulted, including the Dean of the College of Computing Studies (previously the College of Engineering and Technology) and the Director of the Information and Communications Technology Office (ICTO). These interviews aimed to collect information on the limitations of the existing system, such as participation monitoring, data redundancy, and faculty access to evaluation results.
Fig. 3 Key Stakeholder Interviews and Student Survey Data Collection
Besides the interviews, the proponents also designed and conducted a student survey. The survey gathered students' views and feedback on their experience with the current evaluation process and their expectations of a more effective and user-friendly system. These baseline data-gathering activities are recorded in Fig. 3. This planning stage formed the basis for identifying requirements and prioritizing the improvements that would drive the iterative development of the proposed faculty evaluation system.
2) Design: The proponents developed system diagrams, workflows, and prototypes to show the structure and functionality of the proposed faculty evaluation system.
Fig. 4 Initial Mobile User Interface Mockup and Navigation Flow for Student (Figma)
The student dashboard example shown in Fig. 4 gives an overview of how a student completes an evaluation. Inside the student dashboard view, the student is first presented with a list of faculty members that need to be evaluated. Selecting a faculty member takes the student to the evaluation form, where the student answers a set of Likert-scale questions to rate faculty performance and may leave additional written feedback in a comment box. Once the student has completed the evaluation, the inputs are collected and stored in the system. This design ensures that students can easily navigate the evaluation process while maintaining accuracy and efficiency.
Fig. 5 Functional Overview of Student Dashboard to Firebase (PlantUML)
Fig. 5 demonstrates the initial data flow diagram of the Student Dashboard, illustrating how a student uses the system and how the information is saved in Firebase. The first feature of the Student Dashboard lets students view their profile information, which is pulled from the students collection. The next feature allows students to engage with evaluations by answering evaluation forms; the answers are stored in the student evaluations collection. Students can also submit written comments, which are saved to the student evaluations/comments collection in Firebase. Notifications informing students of pending evaluations and system updates are delivered through the notifications collection. Finally, the logout process closes the data flow by allowing students to safely exit the application. All processes in the Student Dashboard are linked directly to Firebase collections, meaning every student interaction with the dashboard is both read from and written to Firebase.
Fig. 6 Initial Mobile User Interface Mockup and Navigation Flow for Admin (Figma)
The team created the prototype of the Admin dashboard in Figma as an interactive user interface (UI), as exemplified by the mockups in Fig. 6. These prototypes illustrated the flow of administrative tasks, including managing academic years, classes, faculty, students, and subjects. The admin can also create and edit questionnaires by defining question criteria, configure system restrictions, and send notifications to users. In addition, the admin can fetch and view evaluation results before distributing the results to the respective professor or faculty members.
Fig. 7 Functional Overview of Admin Dashboard to Firebase (PlantUML)
The diagram shown above is the data flow diagram of the Admin Dashboard. It demonstrates the flow of processes, beginning with Manage Academic Years, where the admin can add, edit, or archive school years, which are stored in the academic years collection. In Manage Classes, the admin can create classes or sections, which are stored in the classes collection. In Manage Faculty, the admin can register faculty, edit their details, and send faculty notifications; a distinct collection (faculty notifications) stores all faculty notifications, while the overall faculty information is recorded in the faculty collection. Manage Students processes enrollment and student information, which is stored in the students collection. Manage Subjects allows the admin to assign or edit subjects, with this data stored in the subjects collection. Manage Questionnaires prepares the evaluation forms, with data stored in two distinct collections (question criteria and questions). Manage Notifications sends announcements or alerts, which are stored in the notifications collection. Manage Restrictions enforces rules, for example that evaluations can only take place within a specified date range; this data is stored in the restrictions collection. Finally, View Evaluation Results grants access to evaluation summaries, reports, and analytics, retrieving data from the student evaluations collection. This process flow demonstrates how each admin function directly interacts with a specific Firebase collection to provide efficiency and data accuracy in the system.
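As one example of how the Manage Restrictions rule could be enforced, here is a brief sketch using the Firebase Admin SDK for Python. The collection name follows the data-flow description, but the field names, document ID, and credentials path are assumptions for illustration and do not reflect the system's exact schema.

```python
# Sketch of the "Manage Restrictions" rule: allow an evaluation only when
# today's date falls inside the window stored in the restrictions collection.
from datetime import date
import firebase_admin
from firebase_admin import credentials, firestore

cred = credentials.Certificate("serviceAccountKey.json")  # assumed credentials file
firebase_admin.initialize_app(cred)
db = firestore.client()

def evaluation_window_open(term_id: str) -> bool:
    """Return True if the current date is inside the admin-defined window."""
    doc = db.collection("restrictions").document(term_id).get()
    if not doc.exists:
        return False  # no restriction record means the evaluation is closed
    window = doc.to_dict()
    today = date.today().isoformat()  # e.g. "2025-10-14"; assumed ISO date fields
    return window["start_date"] <= today <= window["end_date"]

if evaluation_window_open("midterm_2025"):
    print("Evaluation form is open for students.")
else:
    print("Evaluation period is closed.")
```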
3) Develop: The core development of the Smart Faculty Evaluation System centers on the integration of analytic features that handle qualitative and quantitative data at the same time. This dual-processing capability allows the system to draw more thorough insights from student evaluations, converting raw scores and written comments into actionable recommendations.
For qualitative feedback, the system uses Sentiment Analysis based on Natural Language Processing (NLP). A pretrained NLP model (trained on human language content) automatically interprets the emotional tone of students' written comments, categorizing each text into one of three sentiment categories: positive, negative, or neutral. This gives instructors a preliminary understanding of how students generally felt about their teaching methods and behaviors. After analysis, the data are stored in Firebase for data management and later visualized on the Faculty Dashboard as sentiment graphs or summary charts. This makes it easy for faculty to gauge the overall tone of student feedback and identify common challenges or areas that need attention.
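Since the paper does not name the specific pretrained model, the sketch below uses NLTK's VADER as a stand-in to show how written comments can be mapped to the three sentiment categories. Treat it as an illustration of the technique rather than the system's actual NLP pipeline; the ±0.05 thresholds are VADER's conventional defaults.

```python
# Illustrative sentiment classification with a pretrained lexicon model (VADER).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def classify_comment(text: str) -> str:
    """Map a student comment to positive / negative / neutral."""
    compound = sia.polarity_scores(text)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

comments = [
    "The professor explains concepts clearly and is very approachable.",
    "Lectures were disorganized and hard to follow.",
    "The class meets every Tuesday.",
]
for c in comments:
    print(classify_comment(c), "-", c)
```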
To evaluate quantitative feedback, the system uses the Random Forest algorithm to assess student ratings across evaluation indicators such as professionalism, teaching methodology, classroom engagement, and clarity of instruction. By traversing multiple decision trees, the algorithm identifies low-performing areas across the indicators and provides suitable recommendations. Instead of simply reporting averages, this approach enables predictive modeling, giving faculty members specific, data-driven recommendations for their professional growth. These insights help instructors focus on issues that affect student learning outcomes and make full use of the evaluation data.
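The following is a minimal scikit-learn sketch of this idea: a Random Forest trained on per-criterion mean ratings predicts an overall performance category, and low-rated criteria are flagged for recommendations. The training data, labels, and 3.5 cutoff are invented for illustration and are not the study's dataset or model.

```python
# Random Forest over per-criterion mean ratings, with feature importances
# used to surface which criteria drive the result (hypothetical data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

criteria = ["Professionalism", "Teaching Methodology",
            "Classroom Engagement", "Clarity of Instruction"]

# Hypothetical historical data: rows of per-criterion means on a 1-5 scale
X_train = np.array([
    [4.8, 4.6, 4.7, 4.5],
    [3.1, 2.8, 3.0, 2.9],
    [4.2, 4.0, 3.9, 4.1],
    [2.5, 3.0, 2.7, 2.6],
])
y_train = ["Excellent", "Needs Improvement", "Above Average", "Needs Improvement"]

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

new_faculty = np.array([[4.4, 3.2, 4.1, 3.4]])
print("Predicted category:", model.predict(new_faculty)[0])

# Flag low-rated criteria, ranked by how much the forest relies on them
for name, score, importance in sorted(
        zip(criteria, new_faculty[0], model.feature_importances_),
        key=lambda t: t[2], reverse=True):
    if score < 3.5:
        print(f"Recommend focusing on {name} "
              f"(mean {score}, importance {importance:.2f})")
```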
Firebase also served as the system's core backend, providing real-time data collection, processing, and presentation. Firebase handles user authentication, encrypted data storage, and role-specific access so that students, faculty, and administrators can only reach features relevant to their respective roles. When evaluations were completed, the Firebase real-time database instantly stored and synchronized the data securely. Admins were given the capability to review, approve, or reject evaluation entries, ensuring data integrity and validation before inclusion in the faculty analytics. Once approved, the records were synchronized instantly to the Faculty Dashboard, allowing instructors to view updated sentiment analysis and performance metrics.
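A brief sketch of this approval flow under the same assumptions (Firebase Admin SDK for Python, assumed collection and field names): the admin marks a record as approved, and the Faculty Dashboard reads only approved records.

```python
# Sketch of the approval flow: admin approves an evaluation record, and the
# Faculty Dashboard query returns only approved records for that instructor.
import firebase_admin
from firebase_admin import credentials, firestore

firebase_admin.initialize_app(credentials.Certificate("serviceAccountKey.json"))
db = firestore.client()

def approve_evaluation(evaluation_id: str) -> None:
    """Admin action: mark a submitted evaluation as approved."""
    db.collection("student_evaluations").document(evaluation_id).update(
        {"approved": True}
    )

def faculty_results(faculty_id: str):
    """Faculty Dashboard query: return only approved evaluations."""
    query = (db.collection("student_evaluations")
               .where("faculty_id", "==", faculty_id)
               .where("approved", "==", True))
    return [doc.to_dict() for doc in query.stream()]

approve_evaluation("eval_0001")  # hypothetical document ID
for record in faculty_results("faculty_42"):
    print(record.get("ratings"), record.get("comment", ""))
```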
Fig. 8 UML Activity Diagram of the Faculty Dashboard
As shown in Fig. 8, the UML Activity Diagram illustrates the workflow of the Faculty Dashboard. After the administrator approves the evaluation data, the results are made available to faculty, who first see a summary graph. Faculty can then choose to process comments with Sentiment Analysis or process ratings with the Random Forest model. This workflow shows how the system integrates qualitative and quantitative analytics within one process, keeping evaluation results accessible, secure, and technically meaningful. If an evaluation is not approved, the data is not shown to faculty, preserving data integrity and administrative control.
Fig. 9 Faculty Dashboard User Interface
The system also incorporates interactive visualization to make it even more user-friendly. Fig. 9, the Faculty Dashboard User Interface, shows how results are presented to instructors. Faculty members are given a summary graph by academic period (prelim, midterm, and final). From this screen, faculty may choose to analyze ratings with the Random Forest model, which identifies weak areas and offers focused advice, or analyze comments with Sentiment Analysis to see categorized student responses. When no further analysis is chosen, the dashboard simply shows the summary graph.
By implementing NLP sentiment classification, Random Forest predictive modeling, Firebase backend integration, and data visualization, the system transformed the evaluation process from a simple record of scores and comments into a comprehensive, intelligent decision-support tool. This ensured that faculty could interpret evaluation results effectively and use them as a foundation for continuous improvement.
4) Testing: The proponents conducted a series of testing exercises in collaboration with the ICTO Director to verify and refine the system, and with students to gather user experiences with the application. One of the most important collaborative sessions at this stage is documented in Fig. 11. Feedback was collected and adjustments were made to ensure that the system met expectations and produced reliable evaluation results. Bugs and errors detected during testing were addressed promptly, helping to guarantee smooth performance. Agile testing was applied throughout all stages of development to ensure quality, security, and responsiveness across devices.
Fig. 10 Documentation for Student Application Testing and Feedback Assessment
Fig. 11 Documentation of Agile Testing and Stakeholder Feedback Session
5) Deployment: The system was implemented in small increments using an Agile approach, with a soft launch, or beta version, pilot tested in the College of Computing Studies (previously the College of Engineering and Technology). Faculty and students, along with the ICTO Director, participated in this pilot testing so that real users could try the system and give initial feedback. Refinements were made to the interface and functionality of the system based on this input. Importantly, a quantitative assessment with 200 students was conducted to rigorously measure the overall performance and quality of Smart Faculty Evaluation: A Mobile App Using NLP-Based Sentiment Analysis and Random Forest against the evaluation metrics. This thorough validation was completed only after the system was fully implemented across the whole academic institution.
6) Review: Students, faculty, and administrators were surveyed after deployment to determine the clarity, speed, and usability of the system. This step was performed periodically, before and after each evaluation term (Prelim, Midterm, Final), to keep operations running smoothly and to solve arising problems in a timely manner. The review process allowed the team to make improvements, refine the AI algorithms to give more accurate suggestions, and adjust the interface to create a better user experience.
7) Launch: The launch stage was the official introduction of the final version of the Smart Faculty Evaluation System to all users. After being tested, reviewed, and approved, the system became fully available for use. It continued to be updated and improved with new features over time to ensure that it remained functional, relevant, and aligned with changing academic needs.
Data Collection Techniques
Data collection was carried out in two phases to guarantee both rigorous requirements definition and post-development validation. The first phase, Requirements Gathering, used a mixed-method approach involving qualitative interviews with administrative heads (the ICTO Director, Deans, and Faculty) to determine the main operational areas required, and preliminary surveys among students to understand their usability expectations and willingness to adopt. The resulting data defined the overall functional and non-functional requirements of the system. The second phase, System Validation, was carried out after the pilot implementation of the application. This phase consisted of a quantitative system evaluation in which a standardized usability and effectiveness instrument was administered to a stratified group of 200 students to assess user satisfaction and the quality of the system.
Sampling Methods
The study utilized purposive sampling during both the requirements gathering and system validation phases because respondents had to meet specific criteria: they had to be primary users of the proposed system or key stakeholders. The sample population was drawn from the College of Computing Studies (previously the College of Engineering and Technology). For the initial qualitative and administrative interviews, stakeholders such as administrators and selected faculty members were chosen intentionally because of their expertise and knowledge of the existing evaluation process. For the subsequent quantitative system validation, a final sample of 200 students was purposively chosen to represent the main user base, allowing the usability and quality of the system to be evaluated in a real-world setting by the targeted end-users.
Data Analysis Procedures
The information gathered through surveys and interviews was analyzed using percentages, means, and frequency distributions to determine user preferences and system requirements. During implementation, the sentiment analysis algorithm classified comments as positive, negative, or neutral, and the Random Forest algorithm indicated areas the faculty could improve based on evaluation scores. This analysis ensured that the final system met both student and administrative needs.
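For illustration, the descriptive statistics mentioned above reduce to simple computations like the following; the responses are hypothetical.

```python
# Frequency distribution, percentage, and mean for one Likert-scale item.
from collections import Counter

responses = [5, 4, 5, 3, 4, 5, 4, 4, 5, 2]  # hypothetical 1-5 ratings

freq = Counter(responses)
total = len(responses)
for rating in sorted(freq, reverse=True):
    pct = 100 * freq[rating] / total
    print(f"Rating {rating}: {freq[rating]} responses ({pct:.1f}%)")

mean = sum(responses) / total
print(f"Weighted mean: {mean:.2f}")
```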
Validity and Reliability Measures
The evaluation forms and questionnaires were constructed to reflect the institution's official faculty assessment standards, maintaining the validity of the assessment. The system was pilot tested with actual users and adjusted according to their feedback. Reliability was ensured through repeated tests across several iterations of the system, in which the results were found to be consistent. This confirmed that the system could store, process, and display evaluation results correctly and without error.
Ethical Considerations
The research was conducted in full compliance with institutional ethical standards; all participation, including interviews and the submission of survey responses by student and faculty respondents, was purely voluntary and preceded by full disclosure of the purpose of the study. Data collection involving these participants did not start until a formal permission letter had been obtained and accepted by the Dean of the College of Computing Studies and the ICTO Director. Strict confidentiality was observed for all responses, which were used solely for the purposes of the research. To uphold fairness and security in the developed application, strong technical controls were put in place: data were secured with Firebase Authentication and secure database management, access controls ensured that only authorized users could view or approve evaluations, and safeguards were established so that students' evaluative comments could not be altered, preserving the integrity of the faculty assessment process.
RESULTS AND DISCUSSION
Results
A comprehensive quantitative evaluation was carried out to determine the effectiveness and quality of the Smart Faculty Evaluation: A Mobile App Using NLP-Based Sentiment Analysis and Random Forest. The study included 200 student respondents who completed full surveys and undertook system tests and performance checks, accurately measuring how well the app met user needs based on the criteria of the ISO/IEC 25010 software quality model.
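As a rough illustration of how the reported figures can be read, the sketch below maps a weighted mean to a descriptive rating. The cutoffs are inferred from the reported results (scores of 4.50 and above read as Excellent) and are assumptions rather than the study's published scale.

```python
# Weighted mean of Likert responses mapped to an assumed descriptive scale.
def weighted_mean(ratings):
    return round(sum(ratings) / len(ratings), 2)

def describe(mean_score: float) -> str:
    if mean_score >= 4.50:
        return "Excellent"
    if mean_score >= 3.50:
        return "Above Average"
    if mean_score >= 2.50:
        return "Average"
    return "Below Average"

sample = [5, 4, 5, 4, 5, 4, 5, 4]  # hypothetical responses for one criterion
m = weighted_mean(sample)
print(m, describe(m))  # e.g. 4.5 Excellent
```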
1) Functional Suitability: The overall weighted mean for the system was 4.49 (Above Average). Most indicators were rated Excellent; however, the clarity and organization of the evaluation form received a slightly lower rating of 4.46 (Above Average).
Fig. 12 Mean Scores for Functional Suitability Assessment
The application performed well in meeting its primary purpose of providing accurate and complete evaluations. Users stated that the system offers a dependable process, though the slightly lower rating indicates that the clarity of the form design should be improved.
2) Performance Efficiency: An overall weighted mean of 4.50 (Excellent) was obtained. Stability under multiple concurrent users and loading time (responsiveness) both scored above 4.50.
Fig. 13 Mean Scores for Performance Efficiency Assessment
This shows that the app performs well even in higher-demand scenarios. Large amounts of content loaded promptly, with no noticeable delays or latency, an important factor for use in larger deployments.
3) Portability: The system scored 4.46 (Above Average), functioning well across several devices and networks, although it fell slightly short in terms of multilingual support.
Fig. 14 Mean Scores for Portability Assessment
The app performed well on Android devices and operated on different types of networks (Wi-Fi and mobile data), but there is room for better support for comments written in various languages to improve accessibility for a wider range of users.
4) Reliability: The overall average score was 4.51 (Excellent), with high ratings even under sustained high demand and unstable internet connections.
Fig. 15 Mean Scores for Reliability Assessment
Reliability reflects users' trust in the system under difficult network conditions. This is especially important during evaluation periods, when many users may need to access the system over the campus Wi-Fi at the same time for academic purposes.
5) Security: In terms of security, the system scored 4.51 (Excellent). Among the indicators under Security, Data Protection Compliance and Secure Login received the highest ratings.
Fig. 16 Mean Scores for Security Assessment
The high security ratings indicate users' trust in the app. Role-based access and Firebase Authentication ensure that sensitive academic and personal information is protected.
6) Usability: Overall, the app achieved a score of 4.48 (Above Average), with the interface rated highly for clarity and ease of use.
Fig. 17 Mean Scores for Usability Assessment
The ratings for the app's color scheme and overall visual appeal scored slightly lower. Nevertheless, users confirmed that the interface was intuitive, convenient, and easy to use. The findings point to design considerations, particularly the interface color and overall visual appeal, that could further improve user satisfaction.
7) Evaluation Metrics: Scores ranged from 4.46 to 4.53 across all ISO/IEC 25010 criteria, with the highest scores in reliability and security. Functional suitability, performance efficiency, and usability also performed well, while portability was slightly lower but still above average. These findings verify that the app meets the fundamental requirements of functionality, efficiency, and security, while also identifying areas for improvement.
Fig. 18 Overall Mean of ISO/IEC 25010 Quality Metrics
DISCUSSION
The evaluation metrics clearly show that the Smart Faculty Evaluation app is a high-performing, secure, and reliable faculty assessment system. The high reliability and security scores show that users not only had confidence in the system but also considered it dependable under varying conditions. The strong performance efficiency rating further supports its capacity for large-scale institutional use.
Portability and usability were rated slightly lower, pointing to future improvements. Multilingual NLP support and interface design refinements could extend the app's reach and attract more users. These findings highlight that the system already meets recognized quality standards but can still be improved to maximize its flexibility and long-term impact.
CONCLUSION
Conclusion
The Smart Faculty Evaluation app has proven to be a useful solution for modernizing the faculty assessment process at the Universidad de Manila. By combining NLP-based sentiment analysis with the Random Forest algorithm, the system not only computerized the evaluation process but also turned unstructured student feedback into structured insights and data-driven recommendations. Its high performance against the ISO/IEC 25010 standards confirmed that it is operational, dependable, secure, and easy to use for students and faculty members. More importantly, the study demonstrates the benefits of combining qualitative and quantitative assessment to produce more equitable, transparent, and practical outcomes, contributing to evidence-based institutional decision-making and continuous quality assurance in education.
Recommendations
In the future, the system can be improved in several ways to increase its flexibility and sustainability. Strengthening multilingual NLP would allow the system to analyze feedback in Tagalog and other local dialects, making it more accommodating to a diverse student population. Clarity, accessibility, and user satisfaction can also be improved by refining the user interface and overall design. The Random Forest component can be extended with predictive analytics, allowing the system to forecast performance trends and determine which faculty might need early assistance. It will also be necessary to create an iOS version so that the app is accessible to users on a wider variety of devices, and to conduct continuous reviews and updates based on user feedback so that the app remains relevant to the changing needs of the academic community.
ACKNOWLEDGEMENT
The authors would like to express sincere gratitude to the Universidad de Manila – College of Computing Studies for the opportunity to conduct this study and for its support throughout the project. Thanks to the ICTO for its guidance across the different phases, and to the adviser, panel members, classmates, and faculty for their valuable feedback and support. Finally, families and friends are thanked for their support and patience, and above all, the Almighty God who made this project possible.
REFERENCES
- Maamari, B. E., & Naccache, H. S. (2022). The impact of grade inflation on teachers’ evaluation: A quantitative study conducted in the context of five Lebanese universities. Journal of Global Education and Research, 6(2), 192–205. DOI: 10.5038/2577-509X.6.2.1169
- Pacheco Diaz, N., Walker, J. P., Rocconi, L. M., Morrow, J. A., Skolits, G. J., Osborne, J. D., & Parlier, T. R. (2022). Faculty use of end-of-course evaluations. International Journal of Teaching and Learning in Higher Education, 33(3), 285–297. Available at: isetl.org/ijtlhe/
- Gutierrez, E. B. (2023). Performance evaluation practices of select higher education institution in the City of Manila: Basis for enhancing faculty performance. International Journal of Multidisciplinary: Applied Business and Education Research, 4(12), 4239–4243. DOI: 10.11594/ijmaber.04.12.07
- Medallon, M. C., & Dimaculangan, G. A. (2022). Development and validation of a faculty performance evaluation in a flexible learning environment: A student instrument. PAPSCU Excellent Academic Research Link (PEARL) Bulletin, 3(1), 17–26. Available at: ejournals.ph/article.php?id=18330
- Jiang, M., Huang, X., Liu, D., & Hu, S. (2023). Teaching evaluation index of college students based on random forest. In 2023 3rd International Conference on Educational Technology (ICET) (pp. 110–114). IEEE. DOI: 10.1109/ICET59358.2023.10424334
- Bhowmik, A., Noor, N. M., Mazid-Ul-Haque, M., Miah, M. S. U., & Karmaker, D. (2024). Evaluating teachers’ performance through aspect-based sentiment analysis. In 2024 IEEE 9th International Conference for Convergence in Technology (I2CT) (pp. 1–6). IEEE. DOI: 10.1109/I2CT61223.2024.10543706
- Ortiz, M. G., & Dumlao, M. (2025). Designing a teacher recommender system: A thematic literature review of teacher evaluation systems. Journal of Interdisciplinary Perspectives, 3(9), 557–567. DOI: 10.69569/jip.2025.513
- Taguiam, I. M. (2016). Web-based faculty evaluation system of Apayao State College, Philippines. JPAIR Institutional Research, 7(1), 1–14. DOI: 10.7719/irj.v7i1.367
- Caluza, L. J. B., Function, D. G. D., & Verecio, R. L. (2017). Faculty performance evaluation in a Philippine university—Information technology program. IOSR Journal of Humanities and Social Science, 22(9), 28–36. DOI: 10.9790/0837-2209082836
- Hamora, L. A., Rabaya, M. B., Pentang, J. T., Pizaña, A., & Gamozo, M. J. (2022). Students’ evaluation of faculty-prepared instructional modules: Inferences for instructional materials review and revision. Journal of Education, Management and Development Studies, 2(2), 20–29. DOI: 10.52631/jemds.v2i2.109
- Amboya, J. M., Francisco, R. M., Hernandez, R. J., Opeña, J. S., Samson, I. V., & Olipas, C. N. P. (2022). HighTeach: A web-based teacher evaluation system for a higher learning institution in the Philippines. African Journal of Advanced Pure and Applied Sciences (AJAPAS), 1(4), 8–15. Available at: researchgate.net/publication/364060914