Development of Differentiated Numeracy Assessment Tools for Kindergarten with Learning Difficulties in Public School Classrooms
Daniel Joy C. Aquino, Arabella D. De Jesus, Jan Katherine L. Guadalupe, Jobelle D. Pangilinan, Maria Francesca M. Sale, Joseline M. Santos, Joel B. Faustino, Esther C. Domingo
Bulacan State University, City of Malolos, Bulacan, Philippines
DOI: https://dx.doi.org/10.47772/IJRISS.2025.90700073
Received: 19 June 2025; Accepted: 23 June 2025; Published: 30 July 2025
ABSTRACT
Assessment tools in Philippine public schools were underutilized due to limited research and inadequate teacher training. This study focused on the development of differentiated numeracy assessment tools for kindergarten students with learning difficulties in public school classrooms. It examined the tools’ effectiveness, reliability, validity, and impact on learners’ performance. Using an exploratory sequential design, the study involved three teachers and 45 kindergarten learners from a public elementary school in the City of Meycauayan, Bulacan. A pencil-and-paper test was used to categorize learners by skill levels during pretest, followed by differentiated tools used during the post-test. Qualitative data were gathered from teacher interviews, while quantitative data came from test scores of learners. Results showed significant improvement in learners’ scores after using the tools. It highlighted the need for more differentiated assessment tools in other kindergarten subjects, aiming to make assessments engaging and less pressuring.
Keywords: Kindergarten, Numeracy, Differentiated, Assessment tools, Learning Difficulties
INTRODUCTION
Assessment of learning was a cornerstone of the educational process, providing essential insights into learners’ understanding, skills, and progress. Through well-designed assessment, teachers had been able to gather evidence of learning outcomes, gauge the effectiveness of instructional strategies, and identify areas where learners required additional support. These assessments, whether formative or summative, had served not only as indicators of individual learners’ achievement but also as reflections of the overall educational quality within a learning environment. Traditional tools include tests and other measures of academic readiness and progress (Zhou, 2023). However, Nkomo and Charamba (2022) emphasized that traditional assessment methods, such as examinations, tests, and essays, have been criticized for their inability to effectively capture the diverse learning outcomes expected from different courses. These conventional practices had often failed to accommodate the varied abilities and backgrounds of the learners.
In recent years, there has been an increased focus on aligning assessment methods with learning goals, fostering an approach that supports student-centered learning and emphasizes critical thinking, problem-solving, and applied knowledge. Assessment tools in public schools, however, have remained poorly documented due to a lack of research in the field. Studies had shown that there had been insufficient teacher training in public schools in the Philippines (Cagasan et al., 2020).
That study pointed out that many teachers still lacked the knowledge and skills to effectively implement assessment strategies. Factors such as the location and limited resources of public-school teachers have made training for, and preparation of, efficient and effective assessment tools harder to achieve. The study of Aylward (2020) emphasized that one of the critical challenges in assessing young children has been selecting appropriate assessment tools, since the tools need to suit the specific age group and developmental stage of the child being evaluated. Similarly, Keary et al. (2020) found that teachers struggled to engage families in the assessment process, which is essential for understanding children’s development and learning needs; this lack of collaboration could limit a tool’s effectiveness.
Assessment should gauge all learning for all learners. Langston (2020) underscored the value of differentiated assessment that is not limited to paper-and-pencil tests catering to learners with learning difficulties, but that also includes all children regardless of race, religion, or sexual orientation. It is important that teachers not only provide assessments for all learners but also consider the diversity of learners and fit assessments to their learning capacities and capabilities. If assessment tools are not aligned with the educational curriculum or learning objectives, they may not provide meaningful insights into learners’ progress, and this misalignment could lead to confusion and ineffective teaching practices (Mahamud, 2021).
With this, this research aimed to discover the different assessment tools used by the kindergarten class of a public elementary school in the City of Meycauayan, Bulacan. This helped the researchers understand the similarities and differences of each learner in a classroom and craft a numeracy assessment tool that took into account the considerations and suggestions made by teachers. The objectives of the research were, first, to explore the effectiveness of the different assessment tools currently used for numeracy by kindergarten teachers; second, to learn how teachers use these tools in public school classrooms; third, to uncover the long-term impact of these assessments in kindergarten classrooms and how they could be replicated in other classrooms; and lastly, to develop differentiated assessment tools that cater to learners’ different learning needs and to examine their effectiveness.
The researchers conducted interviews and pilot testing using the differentiated assessment tools to interpret the information obtained in this study. Furthermore, the gathered data could serve as a reference for future research on the effectiveness of the differentiated assessment tools crafted for kindergarten in a public elementary school in the City of Meycauayan, Bulacan.
Statement of the Problem
The general problem of this study is: “How may differentiated assessment tools for kinder with learning difficulties in numeracy be developed in public school classrooms?”
Specifically, this study seeks answers to the following questions:
How do teachers develop assessment tools for learners in terms of:
- Considerations
- Types of Assessment Tools
- Diversity of Learners
How may the differences of the learners be described based on their performance?
How may the differentiated assessment tools in numeracy be developed, considering the learning difficulties of kinder in public school classrooms?
How may differentiated assessment tools in numeracy be evaluated by the teachers?
What are the assessment scores of the learners after the conduct of the pilot study?
Is there a significant difference in the assessment scores of the learners after the pilot study?
METHODOLOGY
Research Design
The research explored both qualitative and quantitative aspects, including interviews, classroom observations, and pre- and post-assessment data from the learners, to assess the impact of the differentiated tools. The researchers used mixed methods, particularly an exploratory sequential design, which combined qualitative and quantitative approaches to comprehensively address the research questions. A study by Lee (2024) revealed that mixed-methods research played a vital role in comprehending different kinds of data, adding depth and nuance to research inquiries and enhancing qualitative evaluations. Through the use of this method, the study aimed to provide a comprehensive understanding of how teachers developed, implemented, and evaluated assessment tools in kindergarten public school classrooms. The researchers used interviews as the method of collecting qualitative data from the participants. They interviewed kindergarten teachers to learn how they developed assessment tools, focusing on considerations such as learners’ diversity, the types of assessment tools used, and strategies for learners who had learning difficulties. In the quantitative part, the researchers tested the new assessment tools in kindergarten classrooms in accordance with the method suggested by the cooperating teacher. After the pilot study, the researchers analyzed the scores to identify patterns and to determine whether the tools helped reduce the performance gaps of the learners with learning difficulties.
Population and Sample or Participants of the Study
The target population of this research was the 45 kindergarten learners of a public elementary school in the City of Meycauayan, Bulacan. Since this study used a mixed-method research design, qualitative and quantitative data were used to answer the different statements of the problem. This methodology was valuable because the experiences of teachers were key to grasping different concepts and allowed for comparison across data sources. The study used probability sampling, particularly simple random sampling, for the quantitative data, namely the pre- and post-test scores of the learners during assessment. Using both qualitative and quantitative data consolidated and expanded the resolution of the research problem. Since the target population of this research consisted of a diverse group of learners, simple random sampling ensured that there was no bias in selecting the participants of the study. Xie (2024) also found this sampling method effective: random sampling works well with varied populations since it guarantees that every member has an equal or known chance of being selected, yielding unbiased and valid statistical inferences that truly represent the traits of the whole target population. For the qualitative data, expert purposive sampling was used, since the three kindergarten teachers who were interviewed had already been identified, and their skills and knowledge were essential in gathering their opinions on the interview guide questions of the study. Their experience and professional licenses in assessing learners qualified them as participants in this study. By using random sampling and random assignment, the sample for the study was obtained. The participants were first chosen randomly from the three sections and then classified into skill groups by the kindergarten teachers. The researchers also sought help from the kindergarten teachers to learn about the learners’ levels of progress before the study, through the ECCD Checklist the teachers provided, to ensure that every participant had an equal chance to participate and that there were no biases. The skill levels were pre-identified by the teacher with the following levels of learning: basic skills, intermediate skills, and advanced skills.
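As a rough illustration of the sampling procedure described above, the sketch below draws a simple random sample from a pool of learner IDs and then groups the sampled learners by teacher-assigned skill levels. The pool size, the sample size, and the skill assignments are hypothetical placeholders for illustration only, not the study’s actual roster or the teachers’ actual ECCD-based groupings.

```python
import random

# Hypothetical roster of learner IDs across three sections (placeholder, not the study's data).
pool = list(range(1, 136))

random.seed(42)  # fixed seed only so the illustration is reproducible

# Simple random sampling: every learner in the pool has an equal chance of selection.
sample = random.sample(pool, k=45)

# Placeholder teacher-assigned skill levels; in the study these came from the ECCD Checklist.
skill_levels = {learner_id: random.choice(["basic", "intermediate", "advanced"])
                for learner_id in sample}

# Group the sampled learners by skill level, mirroring the pre-identified categories.
groups = {"basic": [], "intermediate": [], "advanced": []}
for learner_id, level in skill_levels.items():
    groups[level].append(learner_id)

for level, members in groups.items():
    print(f"{level}: {len(members)} learners")
```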
Research Instrument of the Study
This study utilized both qualitative and quantitative research instruments to address its objectives. The researchers conducted interviews with the kindergarten teachers of the public elementary school in the City of Meycauayan, Bulacan. This was the qualitative part of the study, which included Interview Guide Questions that inquired about the considerations that teachers made when formulating an assessment tool, how diverse the learners were, and the types of assessment tools used inside the classroom. To describe the differences among learners based on their performance, the ECCD Checklist of the learners was utilized. This provided the researchers with a better understanding of the skill levels of the learners who participated in the study, based on the evaluations prepared beforehand by the kindergarten teacher. In evaluating and validating the assessment tool, the study called for content validity, since the product that was produced was evaluated by the Master Teacher and the kindergarten teachers from a public elementary school in the City of Meycauayan, Bulacan, using LRMDS. The development of the product was also based on DepEd Order No. 8, s. 2015, titled “Policy Guidelines on Classroom Assessment for the K to 12 Basic Education Program.” Although there were no specific manuals for creating the assessment tools, this DepEd Order included the specific skills that needed to be assessed, which outlined the Cognitive Process Dimensions and their descriptors. The skills on the different levels of these dimensions were also identified. By using this guide, the researchers followed the school’s skill categories in creating the different types of assessments. For Skill Category A in the assessment tool, the cognitive process dimensions targeted were the skills under Remembering and Understanding, as these represented the lowest cognitive levels. For the Skill Category B in the assessment tool, the added cognitive process dimensions targeted were the skills under Applying and Analyzing. Finally, for Skill Category C in the assessment tool, the dimensions targeted included Evaluating and Creating, representing the highest level of cognitive process dimensions. As mentioned in the DepEd Order, these adapted Cognitive Process Dimensions could be used not only in lesson planning but also in the formulation of assessment tools and activities. After carefully formulating and evaluating the assessment tools, a pilot study was conducted with forty-five kindergarten learners in a public elementary school in the City of Meycauayan, Bulacan. To ensure reliability, an internal consistency method was used. Finally, after conducting the pilot study, the scores of the learners were evaluated and analyzed based on the three levels of assessment tools in the rubrics formulated by the researchers.
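The reliability check above is described only as an internal consistency method, without naming the statistic. As one hedged possibility, the sketch below computes Cronbach’s alpha on a small made-up item-score matrix; both the data and the choice of Cronbach’s alpha are assumptions for illustration, not the study’s actual computation.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (learners x items) matrix of item scores."""
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of learners' total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Made-up scores: 6 learners x 5 dichotomously scored items (illustration only).
scores = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```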
Data Gathering Procedures
The researchers initially requested permission through a formal letter addressed to the principal and kindergarten teacher of a public elementary school in the City of Meycauayan, Bulacan. Once permission to conduct the study was granted, the researchers informed the kindergarten teachers that they would be interviewed and sought their assistance in identifying the skill category levels of the learners. The interview was conducted and included questions regarding how teachers developed assessment tools for their learners; the sample questions are provided in the research instrument. The researchers also inquired about the current numeracy topic being taught to the learners, based on the fourth quarter Kindergarten Curriculum Guide. This served as the basis for the pretest and post-test of the differentiated assessment tools created by the researchers. For the pretest, a paper-and-pencil exam based on Bloom’s Taxonomy was used. The exam was first evaluated by professional teachers with expertise in test construction. After revisions were made based on their feedback, the researchers administered the test to the kindergarten classroom. Following this, the researchers assessed the learners’ performance and identified those who had difficulties in numeracy based on their scores. Using the suggestions and insights gathered from the pretest results, the researchers developed differentiated tools in the form of flashcards, which included three distinct sets of numeracy activities, such as tracing numbers, recognizing numbers, and oral assessment. After these tools were created, a Master Teacher, a Head Teacher, and the kindergarten teachers of a public elementary school in the City of Meycauayan, Bulacan, evaluated them using the LRMDS evaluation tool. Based on their feedback, the tools were revised and then used with the learners who had learning difficulties, in order to determine whether there was improvement in their performance. A modified rubric created by the researchers and aligned with the ECCD checklist was used to assess learners’ performance. Throughout the data collection process, time constraints and ethical considerations were carefully taken into account.
RESULTS AND DISCUSSION
PART I. Teachers’ Approach to The Development of Assessment Tools
Considerations of Teachers on The Development of Assessment Tools for Learners
Many learners in the public-school setting had a hard time coping with the assessment tools that were usually distributed among them. Most of these assessment tools were one-size-fits-all tests, which often left learners with learning difficulties behind, and such typical tools have been in use for a long time. Azahra et al. (2024) studied the importance of mathematics learning in developing early childhood numeracy skills through a number recognition approach and the building of learners’ critical thinking and problem-solving skills, underscoring the role of mathematics in understanding the difficulty that learners face with numbers. Sasomo (2024) concluded that differentiated learning not only met the individual needs of learners but also enhanced the overall learning experience in mathematics, improving learners’ engagement and achievement in mathematics assessments. This highlighted the essential factors for developing assessment tools specifically designed for kindergarten learners in public school classrooms, focusing on differentiated learning that supported the learners’ needs and enhanced their educational experiences.
With this, here are the insights of the kindergarten teachers from Calvario Elementary School regarding the development of assessment materials for classroom use:
TR 1- “Developing assessment tools for learners involved a structured approach to ensure that the tools were effective, aligned with the learning objectives, and provided valuable feedback.”
TR 2- “In measuring learners’ learning, I used various methods such as a checklist, observation, portfolio, and standardized tests.”
TR 3 – “As a teacher, assessment tools are essential instruments used to evaluate and measure knowledge, skills, performance, or competencies in educational and organizational settings. The first step in developing assessment tools was to define clear objectives. This involved identifying what to measure and ensuring that the objectives aligned with the overall goals of the training program or educational curriculum. Clear objectives helped focus the assessment tool on relevant outcomes, ensuring meaningful data collection. Then, after that, the appropriate assessment needed to be chosen for the learners.”
*TR – Teacher Response
According to Mahamud (2021), early childhood teachers were aware of learners’ challenges. However, there was a critical need for further training and resources to help them address these challenges effectively, alongside the importance of creating a supportive learning environment.
Here are the common answers of kindergarten teachers when considering the primary factors they take into account when developing assessment tools:
TR 1- “When developing assessment tools for learners, several primary factors need to be considered to ensure they are effective, fair, and aligned with the learning objectives. These factors included learning objectives and outcomes. Moreover, we also had to consider learner needs and characteristics, validity, reliability, fairness, clarity and transparency, the level of difficulty, variety of assessment methods, practicality and feasibility, feedback mechanisms, and ethical considerations. By carefully considering these factors, we could create assessment tools that were fair, effective, and conducive to meaningful, fun learning experiences.”
TR 2- “Assessment tools should have a clear objective, be easy to understand by learners, and be aligned with the target competencies. The appearance of the tool itself for a pen-and-paper assessment, such as the text and font style to be used, the font size, graphics, or images, needed to be taken into consideration for the accomplishment of the assessment.”
TR 3 – “When developing assessment tools for learners, it was essential to consider various factors to ensure the effectiveness and relevance of the assessments. These factors helped in aligning the assessment with learning outcomes, ensuring fairness, and providing meaningful feedback. We had to identify the learning outcomes and objectives of the exam as well as consider the different assessments that would be used.”
Teachers also needed to consider the effectiveness of the assessment tools, especially whether they aligned with the learning objectives of the lesson. According to Goel et al. (2021), effective assessment of learning outcomes was essential in enhancing learners’ learning and ensuring that educational programs met their objectives. The alignment of assessment tasks with learning outcomes and the use of diverse assessment methods played a crucial role in this process. Aligning assessment practices with clearly defined learning outcomes was therefore crucial to enhance learners’ learning and improve educational effectiveness. The teachers were required to assess learners to identify learning outcomes.
Here are the responses from teachers regarding how they ensured that their assessment tools effectively measured the learning outcomes:
TR 1- “I considered these steps: Align assessment with learning objectives, use a variety of assessment methods, establish clear criteria by creating rubrics or scoring guides that clearly define how different levels of learners’ performance would be assessed, pre-test or pilot, provide feedback, and reflection. By following these steps, we could ensure that the assessment tools were not only effective but also comprehensive in measuring the intended learning outcomes.”
TR 2- “I monitored, reviewed, and improved my assessment strategies to maintain their effectiveness. This involved regular reflection on the alignment of the assessment methods that I used. The results of assessments should be used to inform instruction and make adjustments to my teaching strategies as needed. Effective assessment is not a one-time event but an ongoing process that informs both teaching and learning.”
TR 3- “As teachers, we had to ensure that our learners got 80 percent of the points in the given assessment. To ensure our assessment tools effectively measured the learning objectives, we should define clear objectives, align assessments with those objectives, choose appropriate types of assessments, develop rubrics for objective evaluation, implement feedback mechanisms, analyze assessment data for insights into learners’ performance, and continuously improve our assessment practices based on the findings.”
Development of Assessment Tools for Learners in terms of Type of Assessment Tools
Teachers usually relied on materials they had used before, because these had already been proven and tested through their years of teaching experience. Most of these were also approved by experts in the field of education. Mascia et al. (2021) employed a systematic method for categorizing the participants of their study, enabling them to examine how various teaching techniques influenced the growth of numeracy abilities in kindergarten learners. They showed that pencil-and-paper or computerized training for kindergarten was one technique that could enhance numeracy abilities. Similarly, here are the common responses of the teachers regarding the kinds of assessment tools that they used most frequently:
TR 1- “The type of assessment tools that I preferred to use frequently depends on the learning context, subject matter, and specific goals of the assessment. The kinds of assessment tools I commonly used and the reasons why I preferred them were:
Quizzes and Tests. Quizzes allowed for guided feedback and were easy to administer, especially for assessing knowledge, recall, or comprehension.
Rubrics and scoring guides. These provided clear criteria for both learners and instructors, ensuring transparency on how their work was evaluated. Rubrics also gave learners specific feedback on different aspects of their tasks, which helped them understand their strengths and areas of improvement.”
TR 2- “Learners’ portfolios were one of the assessments that best suited kindergarten, as they were a collection of learners’ work that demonstrated their progress and growth. As a teacher, I could determine if specific assessments should be present. It could also collect learners’ learning and demonstrate specific evidence of growth in a variety of standards and content. Using portfolios was an excellent way to get learners involved in the assessment process and for me to authentically assess learner growth.”
TR 3- “I used these two: formative and quarterly assessments. Formative assessments were the most familiar assessments used throughout the learning process to monitor learners’ progress and provide ongoing feedback. I usually had a quiz every Friday to monitor their progress. (e.g., quizzes, discussions). Meanwhile, quarterly assessments were high-stakes assessments that evaluated learners’ learning at the end of an instructional unit, usually given at the end of the quarter.”
Development of Assessment Tools in terms of the Diversity of Learners
Learners develop differently, which is why teachers need to consider different factors when making assessment tools to use inside the classroom. According to Mavidou et al. (2019), the practical implications of differentiated instruction could help teachers create more effective and supportive learning environments for kindergarten, leading to better educational outcomes. Also, understanding the learners’ different interests, readiness, and learning styles made learning more engaging. The teachers needed to accommodate most of the learners, but as much as they wanted to, there were still limitations; nonetheless, they still tried their best. Insorio (2024) tested how differentiated instruction affected the performance of learners from diverse backgrounds and found that differentiated instruction was a successful method for enhancing mathematics performance among various groups of learners. However, despite its success in improving learners’ comprehension in mathematics, it also posed challenges for teachers, calling for supportive educational policies and resources.
With this, here are the responses of kindergarten teachers on how they accommodated their learners’ different learning needs and preferences when designing assessment tools:
TR 1- “Accommodating learners’ different learning needs and preferences when designing assessment tools was crucial to ensure fairness, inclusivity, and effectiveness. To tailor the assessment for diverse learners, we considered offering a variety of assessment formats, adjusting assessment timing, scaffolding assessments, using differentiated criteria, incorporating universal learning (UDL) principles, supporting learners with special needs, incorporating collaborative or peer-based assessments, providing clear and supportive instructions, offering ongoing formative assessments, and encouraging self-reflection and self-assessment. By considering various assessment formats, providing clear instructions, and accommodating specific learners’ needs, we could create a more accessible and supportive assessment environment. Flexibility and personalization were the key to ensuring that all learners, regardless of their backgrounds or abilities, had an equal opportunity to succeed and demonstrate their knowledge.”
TR 2- “Understanding the learners’ needs greatly impacted the success of the assessment. As a teacher, I had to ensure that the tool I was using could accommodate my diverse learners. Strategically planning the design to be used in the assessment enabled the learners to accomplish the assessment.”
TR 3- “There was no such thing as a perfect assessment for the learners. No assessment fits all. However, to effectively accommodate learners’ different learning needs and preferences when designing assessment tools, it was essential to first understand the diversity present in the classroom. This involved recognizing that learners came with varied backgrounds, abilities, and learning styles. Exploring how each learner best absorbed and processed the assessment was crucial to the tool’s success. By identifying whether a learner learned best through visual, auditory, kinesthetic, or other modalities, I could tailor my teaching methods to match these preferences.”
PART II. Differences in Learners’ Performance
Table 1. Performance of Learners based on Pre-Test
STUDENT NUMBER | SKILL CATEGORY SYMBOL | SKILL CATEGORY DESCRIPTION (Based on Bloom’s Taxonomy) | PERCENTAGE |
7, 13, 14, 21, 27 | A | Shows basic skills. (Remembering, Understanding) | 11.11% |
1, 17, 18, 19, 24, 41 | B | Shows intermediate skills. (Remembering, Understanding, Applying, Analyzing) | 13.33% |
2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 15, 16, 20, 22, 23, 25, 26, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45 | C | Shows advanced skills. (Remembering, Understanding, Applying, Analyzing, Evaluating, Creating) | 75.56% |
Table 1 shows the different ranges and categories of learners based on their performance during the pre-test of the study. The learners were categorized using the skill symbols A, B, and C, which represented the different levels of demonstrated skills, namely basic, intermediate, and advanced. This range was derived from the various levels of Bloom’s Taxonomy. Using the taxonomy to formulate assessment questions shaped learners’ performance by calling on diverse cognitive abilities. By matching the categories to the hierarchy of the taxonomy, teachers were able to encourage more profound comprehension and analytical thinking. This organized method not only improved the quality of assessment but also customized the learning experience to meet each learner’s diverse and unique needs. Skill Category A, which consisted of 11.11% of the population, comprised learners who employed the basic skills of remembering and understanding. According to Kandpal (2023), assessments that focused on basic skills often led to rote memorization rather than deeper comprehension. Skill Category B, which consisted of 13.33% of the population, comprised learners who possessed the skills of Remembering, Understanding, Applying, and Analyzing. Questions at these levels encouraged learners to use knowledge in practical scenarios, enhancing problem-solving skills (Ravichandran et al., 2024). Skill Category C, which consisted of 75.56% of the population, comprised learners who possessed higher-order thinking skills spanning all the levels of Bloom’s Taxonomy, namely Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. The study of Kurian et al. (2024) emphasized that higher-order questions promoted critical thinking and creativity, which were essential for real-world applications. Categorizing learners by these levels also shaped the questions formulated in the differentiated assessment tools, which in turn significantly impacted learners’ outcomes. The study of Ravichandran et al. (2024) also indicated that utilizing a range of question types fostered diverse cognitive skills, leading to improved analytical abilities. Such advanced question classification methods ensured alignment with learning objectives and enhanced the reliability of assessments (Vankawala et al., 2023).
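For clarity, the percentages in Table 1 follow directly from the group sizes out of the 45 learners: Category A lists 5 learners, Category B lists 6, and Category C lists 34, so 5/45 ≈ 11.11%, 6/45 ≈ 13.33%, and 34/45 ≈ 75.56%. The short snippet below simply reproduces that arithmetic.

```python
# Group sizes taken from the student numbers listed in Table 1.
group_sizes = {"A (basic)": 5, "B (intermediate)": 6, "C (advanced)": 34}
total = sum(group_sizes.values())  # 45 learners in all

for category, size in group_sizes.items():
    print(f"{category}: {size}/{total} = {size / total:.2%}")
```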
PART III. Development of Numeracy Differentiated Assessment Tools Considering Kindergarten with Learning Difficulties in Public School Classrooms
Figure 1. Process in Developing Differentiated Numeracy Assessment Tools (based on ADDIE Model of Instructional Design)
The process of developing differentiated numeracy assessment tools for kindergarten learners with learning difficulties followed the ADDIE Model of Instructional Design. Analysis was carried out through consultation with the kindergarten teachers regarding the current numeracy lesson being taught to the learners and a review of the Kindergarten Curriculum Guide. This ensured that the assessment was aligned with the learning objectives specified in the Kindergarten Curriculum Guide for the fourth quarter. Through consultation with the kindergarten teachers, the researchers could align the assessments to each learner’s distinct learning needs. This was followed by the Design phase, in which the researchers created a paper-and-pencil test based on Bloom’s Taxonomy, validated by three professors from the College of Education and three teachers in a public elementary school in the City of Meycauayan, Bulacan, who had expertise in test construction, to ensure its quality and appropriateness. Next came the Development of differentiated tools to support struggling learners. In this study, flashcards containing a set of numeracy activities were created. These were made from chipboard, laminated for durability, and designed with appropriate font size, style, lines, and pictures suitable for kindergarten learners. A vibrant-colored wooden storage box was also created to store the flashcards, making them more engaging and accessible. The development of these tools took two weeks and was also validated by the teachers using the LRMDS evaluation tool. After this came the Implementation of the pre- and post-pilot testing in a kindergarten classroom, wherein the researchers used their created paper-and-pencil test during the pretest. Through this test, the researchers systematically assessed the kindergarten learners’ basic numeracy skills. This provided a structured method for assessing their understanding of numeracy concepts and identifying those learners with learning difficulties. The implementation of the pre-test took one week. The differentiated tools were then used during the post-test to assess their impact on the learners with learning difficulties; this implementation took two weeks to complete. Finally, the Evaluation covered the scores from the pre- and post-pilot testing, wherein the researchers evaluated the performance of learners with learning difficulties to determine whether the tools had led to improvement. A modified rubric aligned with the ECCD checklist was used to measure the learners’ progress.
PART IV. Evaluation of the Assessment Tools as Evaluated by the Teachers
Table 2. Teachers’ Evaluation in Terms of Factor A (Content)
Indicators | Ave Rating | Descriptive Interpretation |
1. Content reinforces, enriches, and /or leads to the mastery of certain learning competencies for the level and subject it was intended for. | 4.00 | Very Satisfactory |
2. The material has the potential to arouse the interest of the target users. | 3.00 | Satisfactory |
3. Facts are accurate. | 4.00 | Very Satisfactory |
4. Information provided is up-to-date. | 4.00 | Very Satisfactory |
5. Visuals are relevant to the text. | 4.00 | Very Satisfactory |
6. Visuals are suitable for the age level and interests of the target user. | 4.00 | Very Satisfactory |
7. Visuals are clear and adequately convey the message of the subject or topic. | 4.00 | Very Satisfactory |
8. Typographic layout/design facilitates understanding of concepts presented. | 4.00 | Very Satisfactory |
9. The size of the material is appropriate for use in school. | 4.00 | Very Satisfactory |
10. The material is easy to use and durable. | 4.00 | Very Satisfactory |
TOTAL | 39.00 | Passed |
Table 2 shows the teachers’ evaluation of the manipulatives in terms of Factor A (Content). It generally indicated that the manipulative was formatively evaluated as “very satisfactory” in terms of content. This implied that the differentiated assessment tools made by the researchers met the requirements prescribed for manipulatives with respect to content, receiving a rating of 39.00. According to Goel et al. (2021), effective assessment of learning outcomes was essential for enhancing learners’ learning and ensuring that educational programs met their objectives. Meanwhile, the study of Mavidou et al. (2019) emphasized that understanding learners’ interests, readiness, and learning styles was necessary to make learning more engaging for them. In addition, manipulatives were widely used tools in math classes and supported learners’ conceptual understanding of math content, according to a study by Karten and Murawski (2020).
Table 3. Teachers’ Evaluation in Terms of Factor B (Other Findings)
Indicators | Ave Rating | Descriptive Interpretation |
1. Conceptual errors. | 4.00 | Not Present |
2. Factual errors. | 4.00 | Not Present |
3. Grammatical and/or typographical errors. | 4.00 | Not Present |
4. Other errors (i.e., computational errors, obsolete information, errors in the visuals, etc.) | 4.00 | Not Present |
TOTAL | 16.00 | Passed |
Table 3 presents the evaluation of teachers on the manipulative in terms of Other Findings (Factor B). It further showed that conceptual errors, factual errors, grammatical and/or typographical errors, and other errors were “not present” as evaluated, and the manipulative received a perfect rating of 16.00. In the study of Azahra et al. (2024), the importance of mathematics learning in early childhood numeracy skills development was emphasized through the use of a number recognition approach and the development of critical thinking and problem-solving skills. To determine how well manipulatives improved learning outcomes, it was crucial to evaluate them in educational settings, as noted by Kim (2023). In addition, Guanzon-Pisaras et al. (2020) found that using math manipulatives significantly enhanced the numeracy skills of kindergarten pupils. Children, particularly those who enjoyed fun activities, were more engaged when using concrete tools.
Table 4. Teachers’ Evaluation in Terms of Factor C (Additional Requirements for Manipulatives)
Indicators | Ave Rating | Descriptive Interpretation |
1. Adequate support material is provided. | 4.00 | Very Satisfactory |
2. Activities are summarized; extension activities are provided. | 4.00 | Very Satisfactory |
3. Suggested activities support innovative pedagogy. | 3.00 | Satisfactory |
4. Manipulative is safe to use. | 4.00 | Very Satisfactory |
5. The size and composition of the manipulatives are appropriate for the intended audience. | 4.00 | Very Satisfactory |
6. Suggested manual tasks within the activities are compatible with the motor skills of the intended users | 4.00 | Very Satisfactory |
TOTAL | 23.00 | Passed |
Table 4 shows the evaluation of the teachers regarding the instructional and technical design of the manipulative. The evaluators formatively assessed the differentiated assessment tools and rated them as “very satisfactory,” with a rating of 23.00 across the given indicators for instructional and technical design. This implied that the instructional and technical design of the manipulative met the given criteria.
In accordance with the summary of ratings of the evaluators across the areas for evaluation, the earned points for every factor had met the required minimum points. All of the factors and considerations described in the evaluation tool had been interpreted as “Passed.” This implied that the differentiated assessment tools had met the requirements prescribed by the Department of Education for Manipulatives. This showed that, to assess how effectively these tools had impacted learning outcomes, it had been essential to evaluate and meet the requirements prescribed by the Department of Education to determine their effectiveness in real educational settings.
PART V. Assessment Scores of the Learners Before and After the Conduct of the Pilot Study
Table 5. Score ranges of learners during the pretest and after the implementation of the study (post-test)
STUDENT NUMBER | SKILL CATEGORY SYMBOL | SCORE RANGE (PRE-TEST) | SCORE RANGE (POST-TEST) |
7, 13, 14, 21, 27 | A | 10-15 | 19-22 |
1, 17, 18, 19, 24, 41 | B | 19-22 | 20-24 |
2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 15, 16, 20, 22, 23, 25, 26, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45 | C | 23-25 | 23-25 |
Comparing the results of both the pre-test and post-test of the learners, the table showed that there was an improvement between the scores before and after the treatment, from using only paper-and-pencil tests to using the crafted differentiated assessment tool. This was primarily evident in the scores of Skill Category Group A, which initially ranged from only 10–15 and eventually reached a range of 19–22 during the post-test. Skill Category Group B also demonstrated progress, with scores initially ranging from 19–22, increasing to 20–24 after the use of the assessment tool. Meanwhile, Skill Category Group C remained consistent in their scores, maintaining a range of 23–25. This indicated that there was a significant improvement when using the differentiated assessment tool compared to relying solely on the paper-and-pencil test. Although paper-and-pencil tests were also considered an effective method for assessing and improving learners’ numeracy skills (Mascia et al., 2021), a notable difference was observed in the scores after the implementation of the differentiated assessment tool. This supported the conclusion that mathematics manipulatives served as an effective resource for enhancing numeracy abilities in kindergarten learners, resulting in noticeable progress in their learning outcomes, as corroborated by the study of Guanzon-Pisaras et al. (2020).
PART VI. Significant Difference between the Assessment Scores of the Learners after the Pilot Study
Table 6. t-test for the significance value between pre-test and post-test scores of the students
Variables | T | Sig-value | Decision | Interpretation |
Pre-Test and Post-Test of Students | 4.78 | 0.00002 | Reject the null hypothesis (Ho) | There is a significant difference between the scores of the pre-test and post-test of students. |
Using the T-Test Calculator for 2 Dependent Means, the paired t-test results between the pre-test and post-test of the study were obtained, as shown in the table. The value of t was 4.783066, and the value of p, or the significance value, was .00002. The result was significant at p < .05, so the decision was to reject the null hypothesis. This indicated that there was a significant difference between the pre-test and post-test scores of the learners and, therefore, that using differentiated assessment tools for learners with learning difficulties was effective in enhancing their performance during classroom assessments in numeracy. Monitoring Skill Category A showed that most learners with learning difficulties improved their scores. Alongside the successful implementation of the assessment tools, continuous professional development and access to resources for teachers were crucial in providing them with the necessary skills for applying differentiated assessments. Consistent updates on learners’ progress and evaluation results aided in improving teaching methods and increasing learners’ involvement (Arsyad & Suadiyatno, 2024).
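A paired (dependent-samples) t-test such as the one reported in Table 6 can be reproduced with standard statistical software. The sketch below uses SciPy’s ttest_rel on made-up pre- and post-test scores purely to illustrate the computation and the decision rule at p < .05; the numbers are not the learners’ actual scores.

```python
from scipy import stats

# Hypothetical pre- and post-test scores for the same learners (illustration only).
pre = [12, 14, 10, 15, 13, 20, 19, 21, 22, 19, 24]
post = [20, 21, 19, 22, 20, 23, 22, 24, 24, 21, 25]

# Paired t-test on the same learners' scores before and after the intervention.
result = stats.ttest_rel(post, pre)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.5f}")

# Decision rule at the 0.05 significance level.
if result.pvalue < 0.05:
    print("Reject the null hypothesis: significant pre/post difference.")
else:
    print("Fail to reject the null hypothesis.")
```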
CONCLUSION
According to the findings and interpretation of the study, the development of differentiated tools helped teachers readily identify learners with learning difficulties and offer the right kind of support, leading to a more accurate evaluation of learners’ progress in numeracy skills than traditional assessments provide. Learners with potential learning disabilities understand assessments and instructions better when manipulatives accompany them. There is a significant improvement in the assessment results from the pre-test to the post-test. A systematic procedure was followed in the development of these differentiated assessment tools, which highlighted why adequate preparation time was essential, as it enabled the development of a strong intellectual basis. The content of the activities in this differentiated assessment was aligned with the Kindergarten Curriculum Guide so that it would be timely, useful, and meaningful for learners. In addition, the opinions and recommendations of the professionals involved in the creation of these tools were necessary, since they offered better ways to improve and utilize them inside a classroom. However, the study was limited by its small sample size, drawn mainly from one school, and its short follow-up period.
RECOMMENDATIONS
Looking into the conclusions and summary of this study, the researchers would like to suggest the following:
- Conduct future research providing an in-depth study of the considerations made by teachers in developing differentiated assessment tools for kindergarten in different localities;
- Explore new methods, other than Bloom’s Taxonomy, for categorizing learners and classifying them based on their skill levels;
- Develop more differentiated assessment tools for numeracy beyond manipulatives, including new forms such as technology-integrated tools, supported by teacher trainings and seminars.
ACKNOWLEDGEMENT
This research paper was made possible through the invaluable support and guidance of our families, professors, friends, and research participants, to whom we extend our heartfelt gratitude. We are very grateful to our professor, Dr. Joseline M. Santos, as well as to Dr. Marilou S. Alcanar and the Kindergarten teachers of Calvario Elementary School, our critic professor and adviser, Mrs. Esther G. Domingo and Mr. Joel B. Faustino, and other institutions and individuals including Mr. Troy Aquino, Mr. and Mrs. Pangilinan, and the staff of Bulacan State University, for their contributions that significantly enhanced the quality of our study. Above all, we give thanks to our Almighty God for granting us strength, wisdom, and perseverance throughout the completion of this research.
REFERENCES
- Arsyad, Moh. A., & Suadiyatno, T. (2024). Differentiated Assessment in EFL Classroom in Indonesia: Prospects and Challenges. Journal of Language and Literature Studies, 4(2), 516–523. https://doi.org/10.36312/jolls.v4i2.1913
- Aylward, G. P. (2020). Conducting a Developmental Assessment in Young Children. Journal of Health Service Psychology, 46(3), 103–108. https://doi.org/10.1007/s42843-020-00015-0
- Azahra, Z., Siregar, A. F., Alfarisi, M., & Wandini, R. R. (2024). The importance of mathematics learning in developing early childhood numeracy skills. Journal Prinsip Pendidikan Matematika, 7(1), 80–87. https://doi.org/10.33578/prinsip.v7i1.227
- Cagasan, L., Care, E., Robertson, P., & Luo, R. (2020). Developing a Formative Assessment Protocol to Examine Formative Assessment Practices in the Philippines. Educational Assessment, 1–17. https://doi.org/10.1080/10627197.2020.1766960
- Goel, N., Deshmukh, K., Patel, B. C., & Chacko, S. (2021). Tools and rubrics for assessment of learning outcomes. In Advances in educational technologies and instructional design book series (pp. 211–254). https://doi.org/10.4018/978-1-7998-4784-7.ch013
- Guanzon-Pisaras, G. F. (2020). Mathematics Manipulatives for the Development of Numeracy Skills of Kindergarten Pupils. 7(1), 1. https://doi.org/10.18868/SHERJ7J.07.010120.07
- Insorio, A. O. (2024). Addressing student diversity to improve mathematics achievement through differentiated instruction. International Journal of Professional Development, Learners and Learning. https://doi.org/10.30935/ijpdll/14462
- Keary, A., Garvis, S., Zheng, H., & Walsh, L. (2020). “I’m learning how to do it”: reflecting on the implementation of a new assessment tool in an Australian Early Childhood. International Journal of Inclusive Education, 26(13), 1–15. https://doi.org/10.1080/13603116.2020.1803428
- Kandpal, M. (2023). Assessment Techniques for School Teachers. International Journal of Science and Research. https://doi.org/10.21275/sr23919115802
- Karten, T. J., & Murawski, W. W. (2020). Co-Teaching Do’s, Don’ts, and Do Betters. ASCD. (ERIC Document No. ED606323)
- Kim, D. H. (2023). Educational interventions involving physical manipulatives for improving children’s learning and development: A scoping review. Review of Education, 11(2). https://doi.org/10.1002/rev3.3400
- Langston, A. (2020, October 27). What is an Inclusive Classroom? And why is it Important? ViewSonic Library. https://www.viewsonic.com/library/education/what-is-an-inclusive-classroom-and-why-is-it-important/
- Lee, Y. S. (2024). Qualitative and mixed methods. Elsevier EBooks, 229–232. https://doi.org/10.1016/b978-0-323-85663-8.00010-6
- Mahamud, W. (2021). Early childhood assessment and observation of educators’ knowledge of learners in Sissala East Municipal, Ghana. 3(2), 24. https://royalliteglobal.com/african-studies/article/view/533
- Mascia, M. L., Agus, M., Fastame, M. C., & Penna, M. P. (2021). The Enhancing of Numeracy Skills Through Pencil-and-Paper or Computerized Training for Kindergarteners (pp. 3–18). Springer, Cham. https://doi.org/10.1007/978-3-030-65657-7_1
- Mavidou, A., & Kakana, D. (2019). Differentiated Instruction in Practice: Curriculum Adjustments in Kindergarten. Creative Education, 10(3), 535–554. https://doi.org/10.4236/CE.2019.103039
- P, B. P., & Kurian, C. (2024). Generation of Bloom’s taxonomy-based complex-level questions using a knowledge graph. 2024 IEEE International Conference on Signal Processing, Informatics, Communication and Energy Systems (SPICES), 1–6. https://doi.org/10.1109/spices62143.2024.10779773
- Ravichandran, K., & B A. V. (2024). Bloom’s Taxonomy Categories in the Economy of Literature Teaching-Learning Process. International Research Journal of Multidisciplinary Scope, 05(03), 721–727. https://doi.org/10.47857/irjms.2024.v05i03.0827
- Sasomo, B. (2024). Differentiated and enjoyable learning to facilitate mathematics subject assessment. Journal Theorems (The Original Research of Mathematics). https://doi.org/10.31949/th.v9i1.8088
- Nkomo, S. A., & Charamba, E. (2022). Inclusive Formative Assessment for Diverse Pre-Service Foundation Phase Literacy Teachers. IGI Global EBooks, 96–116. https://doi.org/10.4018/978-1-7998-8579-5.ch005
- Vankawala, S., Thakkar, A., & Bhatt, N. (2023). Advanced Educational Assessments: Automated Question Classification Based on Bloom’s Cognitive Level. 1–6. https://doi.org/10.1109/easct59475.2023.10392329
- Xie, G. (2024). Random sampling is a mathematical necessity beyond debate or opinion for valid statistical inferences. https://doi.org/10.31235/osf.io/xswv4
- Zhou, M. (2023). Significance of Assessment in Learning: The Role of Educational Assessment. Science Insights Education Frontiers, 18(2), 2881–2883. https://doi.org/10.15354/sief.23.co215