Students in Blended Learning: Their Preference of Assessment in English Courses
- Alfonso Samillano Jr.
- Andy Bon Dariagan
- 324-331
- Mar 10, 2025
- Education
Alfonso Samillano Jr.1, Andy Bon Dariagan2
1University of Antique
2Capiz State University
DOI: https://doi.org/10.51584/IJRIAS.2025.10020027
Received: 07 February 2025; Accepted: 11 February 2025; Published: 10 March 2025
ABSTRACT
Blended learning has persisted in higher education institutions in the Philippines in the years since the pandemic. With this modality available as an option whenever situations emerge in which face-to-face meetings are not possible, the concern among teachers and students alike is how assessment is implemented. In this study, the researchers aimed to determine the preference of students taking English courses among assessment tools categorized as traditional and alternative assessments. Their actual scores (performance) in both traditional and alternative assessments were compared with their responses to the questionnaire (preference). Using mean and standard deviation, the preferences for assessment tools and the actual scores obtained with these tools were summarized. It appears that the students prefer both traditional and alternative assessments; they do not have a strong general preference for either type of test. However, they highly prefer alternative assessment tools such as portfolios and performance tests. It was also revealed that their actual scores in tests using both traditional and alternative assessments are high. Using Spearman's rho, the relationship between students' preference and performance was tested for both alternative and traditional assessments. No significant relationship was found between their preference for traditional assessment tools and their actual scores; however, a significant relationship was found between their preference for alternative assessment and their actual scores.
Keywords: assessment of learning, traditional assessment, alternative assessment
INTRODUCTION
The global lockdown of education institutions caused major and likely unequal interruptions to students' learning. Due to pandemic restrictions, many schools and institutions strove to create student-centered, flexible, remote learning environments. This called for more student ownership and flexibility not only in the 'input' but also in the assessment process: from what and when to assess, to the use of feedback by educators to inform their next steps and by students to reflect upon their learning (Puri, 2022). Flexible assessment is also important within inclusive education, as it gives students a valued 'voice and choice' in classroom dynamics and allows them to plan and modify their own learning journey, identifying what has been learned and what needs to be learned next.
Even after the pandemic, Philippine higher education institutions did not fully return to face-to-face (f2f) instruction. Schools still use blended learning as an option whenever situations arise in which f2f meetings are not possible. To monitor students' learning in the blended learning modality, teachers were asked to devise novel and varied methods (United Nations Children's Fund, 2020). Different assessment methods were required, and these methods are generally categorized as traditional and alternative assessments.
Traditional assessments are tests that gauge knowledge and comprehension by having students "recall facts and concepts, measure their own improvement over time, comprehend and restate ideas, and articulate knowledge" (Seneca College et al., 2022). These testing techniques include multiple choice questions, fill-in-the-blanks, true-false, matching, and essays.
According to Bailey (1998), as cited in Dikli (2003), traditional assessments are indirect and inauthentic. She adds that traditional assessment is standardized and, for that reason, one-shot, speed-based, and norm-referenced.
Law and Eckes (1995), as cited in Dikli (2003), emphasized the same issue, stating that traditional assessments are single-occasion tests: they measure what learners can do at a particular time. Test scores, however, cannot reveal a learner's progression, nor can they show what particular difficulties the student had during the test. Similarly, Simonson et al. (2000) stated that traditional assessments often focus on the learner's ability to memorize and recall, which are lower-level cognitive skills.
In contrast, alternative assessments develop students' higher-order thinking skills by posing more realistic problems. Exam Soft (2021) categorized alternative assessment into three types: authentic assessment, constructivist assessment, and performance-based assessment. The two most common alternative assessment techniques are portfolios and projects (Dikli, 2003).
Alternative assessments assess higher-order thinking skills and give students the opportunity to demonstrate what they have learned. These assessment tools focus on the growth and performance of the student: if a learner fails to perform a given task at a particular time, they still have the opportunity to demonstrate their ability at a different time and in a different situation. Since alternative assessment is developed in context and over time, the teacher has the chance to measure the strengths and weaknesses of the student in a variety of areas and situations (Law and Eckes, 1995, as cited in Dikli, 2003).
The traditional teacher-centered approach to assessment places teachers in total control of what, how, and when students’ learning is assessed. Alternatively, choice-based assessment is a learner-centered approach to assessment that allows students to choose, to some extent, what, how, and/or when their learning is assessed (Spinney, 2023).
In the post-pandemic educational setting in the Philippines, the Commission on Higher Education (CHED) maintained flexible teaching-learning arrangements. CHED Memorandum Order No. 4, series of 2020, outlined the Guidelines on the Implementation of Flexible Learning, which suggest the use of "various means of Delivery and assessment of learning… to show the achievement of the set learning outcomes for each course for the program."
As higher education institutions in the Philippines employ blended learning and other modalities, assessment methods should also vary, and students' preferences should be considered. Despite the availability of various modalities in colleges, students aspire to pursue online learning (Illescas et al., 2023). As students' learning preferences lean toward a more flexible setup, it is necessary to know what type of assessment they prefer as well. This will guide teachers' decisions in selecting assessment methods for their students.
In light of the ideas discussed above, the researchers intended to determine the preference of students in terms of assessment while blended learning is adopted as a learning modality, to establish a basis for the administration of classroom assessment.
Statement of the Problem
This study aimed to determine the students’ preference between alternative and traditional assessment methods during the blended learning modality of teaching and learning in the classroom and compare this data with their actual scores in those assessment types. Specifically, this sought to answer the following questions:
- What is the preference of students among the assessment tools utilized in blended learning?
- What is the performance of students in traditional versus alternative assessment?
- Is there a significant relationship between the preferred assessment tools and students’ actual scores?
Hypothesis
Based on the problems stated above, the following hypothesis was considered.
There is no significant relationship between the preferred assessment tools and students’ actual scores.
MATERIALS AND METHODS
Research Design
This study utilized a correlational research design to investigate relationships between variables without the researchers controlling or manipulating any of them. It covered a total population of 27 third-year college students taking the Bachelor of Secondary Education major in English at Capiz State University, Pontevedra Campus. The researchers believed that this group of students could provide the necessary data for the study because they were experiencing both the face-to-face and blended learning modalities, as well as both traditional and alternative assessment methods. It was presumed that these students would be able to differentiate, compare, and choose the assessment method they believed to be useful, applicable, and effective for them.
This study underwent three stages: 1) Gathering of data with the use of an adopted and modified questionnaire; 2) Integrating the test scores for alternative assessment tests and traditional assessment tests; and 3) Correlating and analyzing the results.
Instrumentation
This study utilized two tools: a questionnaire and the actual scores of students in both traditional and alternative assessment. The questionnaire used for the study was adopted and modified from Phongsirikul (2018). This questionnaire collected the data on students’ preference from all assessment types.
The actual scores in English courses were gathered from the teachers and these scores were the results of both alternative and traditional assessments. The data were transmuted with a grade ceiling of 100.
To facilitate the interpretation of the computed mean for preference, the researchers adopted and modified the Likert scale used by Bañez (2016). The following mean ranges with their corresponding interpretations were used: 3.51–4.00: Highly Preferred (HP); 2.51–3.50: Preferred (P); 1.51–2.50: Least Preferred (LP); 1.00–1.50: Not Preferred (NP).
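The mean-range mapping above can be expressed as a simple lookup. The Python sketch below is illustrative only; the function name and structure are ours, not part of the study's instrument:

```python
def interpret_preference(mean_score: float) -> str:
    """Map a mean preference score on the 4-point scale to the study's
    adjectival interpretation (scale adapted from Banez, 2016)."""
    if not 1.00 <= mean_score <= 4.00:
        raise ValueError("mean must lie within the 4-point scale (1.00-4.00)")
    if mean_score >= 3.51:
        return "Highly Preferred (HP)"
    if mean_score >= 2.51:
        return "Preferred (P)"
    if mean_score >= 1.51:
        return "Least Preferred (LP)"
    return "Not Preferred (NP)"

# Overall means reported for the two assessment categories:
print(interpret_preference(2.99))  # -> Preferred (P)
print(interpret_preference(2.94))  # -> Preferred (P)
```

A comparable lookup, with the ranges 91–100 (High), 81–90 (Average), 71–80 (Low), and 70 and below (Very Low), applies to the performance scores described next.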
To facilitate the interpretation of students' performance, the following categories with their descriptions were used: 91–100: High; 81–90: Average; 71–80: Low; 70 and below: Very Low.
Data Collection
The data-gathering stage began with seeking approval from the concerned offices to administer the questionnaire to the target respondents. The researchers personally distributed copies of the questionnaire to the 27 respondents and clarified the process and the items for their convenience. The actual scores (performance) in both traditional and alternative assessments were requested from the teachers, with the approval of the college dean and the students themselves.
The accomplished questionnaires (preference) were immediately retrieved. The answers made by the respondents in the questionnaire administered were subjected to tabulation and scoring.
The data gathered were processed and analyzed using the Statistical Package for the Social Sciences (SPSS) software. The following statistical tools were used in this study: frequency count, percentage, mean, and Spearman's rho.
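The Spearman's rho statistic computed in SPSS can be reproduced outside that package. The sketch below uses hypothetical illustrative values, not the study's actual data, and applies the classic rank-difference formula (valid when there are no tied values):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation via the classic formula
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), assuming no ties."""
    n = len(x)
    # Rank of a value = its 1-based position in the sorted list.
    rank = lambda v, values: sorted(values).index(v) + 1
    d_squared = sum((rank(a, x) - rank(b, y)) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical illustrative data (not the study's actual scores):
preference  = [3.2, 2.8, 3.6, 2.4, 3.0, 3.4, 2.6]   # questionnaire means
performance = [92, 91, 90, 96, 93, 94, 95]           # transmuted test scores

print(round(spearman_rho(preference, performance), 3))  # -> -0.679
```

For data with ties, as in the actual study, SPSS computes rho as a Pearson correlation on average ranks, so a production analysis would use that tie-corrected form rather than this simplified formula.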
The researchers interpreted and analyzed the results of the study to provide answers to the specific objectives. Results were also the ground for the researchers to establish their recommendations.
This study is only limited to the responses made by the respondents in the administered questionnaire and the acquired results of their performances in the different test types.
RESULTS AND DISCUSSION
The results of the survey revealed (see Table 1) that students "Preferred" both the alternative and traditional assessment tools (M = 2.99 and M = 2.94, respectively). The slightly higher mean of the alternative assessment tools is driven by the mean scores of Portfolio (M = 3.22) and Performance (M = 3.11), which are both alternative assessment tools.
This generally implies that across the different assessment tools, regardless of whether they are traditional or alternative, the students' overall preference does not vary significantly. They perceive these assessment tools in the same manner as means of testing their acquired learning. However, the students regard Portfolio and Performance as their highly preferred tools for assessing their learning.
This supports the study of Phongsirikul (2018) which revealed that students rated portfolio as the second most liked tool and stated that it was a useful method for the students’ learning process, to review their lessons, and prepare themselves for the exams throughout the semester. In addition, research (Darling-Hammond & Adamson, 2014) indicates that students who are engaged in completing performance tasks and portfolios that require reflecting on and revising their work ultimately perform better on higher-order thinking measures (e.g., synthesis, analysis, critical thinking, and communication) and demonstrate stronger growth mindsets.
Oral tests and other modes of alternative assessment, i.e., computerized tests and portfolios, were not among the students' preferences according to the results of the study of Watering et al. (2008).
The overall result differs from the study of Phongsirikul (2018), which revealed that students manifested a higher preference for traditional assessment. However, a study conducted by Irawan (2017) showed that students perceived alternative assessment as a better tool for measuring their acquired learning. Thus, preference depends on the specific group of students and may be influenced by unique factors present in their own context.
Variety in assessment is undoubtedly a virtue. Even for similar learning objectives, there are a number of compelling reasons to evaluate in more than one way in order to ascertain a sound measurement and to maintain the development of a robust understanding (Mazzeo et al., 1993, as cited in Nasab, 2015).
Table 1. Preference on assessment tools
| Assessment Tools | Mean | Standard Deviation | Description |
|---|---|---|---|
| Alternative Assessment | 2.99 | .49280 | Preferred (P) |
| Portfolio | 3.22 | .847 | Highly Preferred (HP) |
| Performance | 3.11 | .847 | Highly Preferred (HP) |
| Collaborative Works | 2.89 | .847 | Preferred (P) |
| Oral Reports | 3.00 | .734 | Preferred (P) |
| Interviews | 2.70 | .724 | Preferred (P) |
| Traditional Assessment | 2.94 | .92334 | Preferred (P) |
| Quizzes | 2.96 | .940 | Preferred (P) |
| Major Exams | 2.93 | .997 | Preferred (P) |
The data revealed (see Table 2) that the performance of students, based on their test scores in both traditional and alternative assessments, is "High" (M = 91.13 and M = 93.87, respectively).
This implies that regardless of the assessment tools used for blended learning, students can perform well and can achieve high scores. Furthermore, students might already have mastered the techniques and knowledge of how these assessment tools are utilized to measure what they learned.
Although both assessment types obtained the same adjectival rating in the students' actual scores, it can be noted that the alternative assessment tools have a slightly higher mean (93.87 vs. 91.13). This implies that some students may have scored higher in alternative assessments than in traditional assessments.
The results in the study of Krawczyk (2017) also showed that while the alternative assessment model did not have a direct impact on students’ daily engagement or intrinsic motivation, it did increase students’ understanding of how their work correlated to a final grade in the unit, and it created opportunities for students to make connections to their learning and thus more actively plan their future work. This supports the possibility that students achieved high performance in the utilization of alternative assessments because of the different factors including the students’ understanding of the effect of alternative assessment methods with bearing in their final grades, just like how the traditional assessment is viewed to be the basis of students’ academic performance.
Table 2. Actual performance of students in traditional and alternative assessment
| Assessment Tools | Mean | Standard Deviation | Description |
|---|---|---|---|
| Traditional Assessment | 91.13 | 1.78989 | High |
| Alternative Assessment | 93.87 | .59731 | High |
The data in Table 3 show that there is no significant relationship between the students' actual performance and their preference for the traditional assessment tools used to measure their learning (p = .133). This implies that although they "Preferred" the use of traditional assessment, this does not directly influence how they score in these tests. They may score lower or higher depending on the topics covered and on their retention and understanding of those topics.
This corroborates the results of the study of Watering et al. (2008), wherein no relationship was found between students' perceptions of assessment and their assessment scores.
Table 3. Relationship between students’ preference and performance on traditional assessment
| Spearman's rho | | preference | performance |
|---|---|---|---|
| preference | Correlation Coefficient | 1.000 | -.296 |
| | Sig. (2-tailed) | . | .133 |
| | N | 27 | 27 |
| performance | Correlation Coefficient | -.296 | 1.000 |
| | Sig. (2-tailed) | .133 | . |
| | N | 27 | 27 |
Table 4 shows that there is a significant relationship between the students' preference and performance in alternative assessment (ρ = -.472, p = .013). This implies that their "Preferred" general response to alternative assessment is related to their test scores. Moreover, this suggests that students' preference for the kind of testing used in the classroom is associated with how they score on those tests relative to assessment tools they do not prefer.
In fact, Scouller (1998), as cited in Watering et al. (2008), investigated the relationships between students' learning approaches, preferences, perceptions, and performance outcomes in two assessment contexts: a multiple choice question examination requiring knowledge across the whole course, and assignment essays requiring in-depth study of a limited area of knowledge. The results indicated that if students prefer essays, this is more likely to result in positive outcomes in their essays than if they prefer multiple choice question examinations.
Table 4. Relationship between students’ preference and performance on alternative assessment
| Spearman's rho | | preference | performance |
|---|---|---|---|
| preference | Correlation Coefficient | 1.000 | -.472* |
| | Sig. (2-tailed) | . | .013 |
| | N | 27 | 27 |
| performance | Correlation Coefficient | -.472* | 1.000 |
| | Sig. (2-tailed) | .013 | . |
| | N | 27 | 27 |

\*. Correlation is significant at the 0.05 level (2-tailed).
CONCLUSIONS
Based on the results of the study, the following conclusions were drawn:
- Students are likely to accept any kind of assessment tool or method, whether traditional or alternative. Whatever the teachers utilize, students do not show a general tendency to dislike any kind of assessment method. However, portfolio and performance assessment are highly preferred by students. Their exposure to these assessment tools made them aware of how these are used as forms of assessment, which in turn strongly shaped their preference. This may suggest that students want more freedom and creativity in their responses to tests.
- Students can manifest positive learning outcomes when they are tested using traditional assessment tools. It follows that quizzes and major examinations are effective assessment tools.
- Students show a stronger liking for alternative assessments, and this is manifested in their slightly higher scores. This shows that students enjoy more freedom and creativity in their responses to tests while these tests still effectively reflect the learning outcomes that were achieved.
- With blended learning as an option for the learning modality, alternative assessments can supplement the evaluation of students' learning in cases where traditional assessment is not feasible.
RECOMMENDATIONS
Based on the conclusions above, the following are suggested:
- Teachers may consider using varied assessment methods in the classroom. This will test students' higher-order thinking skills and allow them to explore how to apply their learning through different assessment methods that measure their takeaways from the subjects taught.
- Teachers should still recognize the significance of traditional assessments even if some classes are held online. Although the blended learning modality gives teachers more freedom to choose their assessment methods, traditional assessments are indispensable because they measure what alternative assessments cannot cover. There are tools on the internet that allow teachers to administer quizzes and examinations while safeguarding the integrity of student scores against cheating or tampering.
- Teachers should utilize alternative assessment to test learning objectives that require students to be creative, expressive, and resourceful. In fact, more alternative assessments should be employed in the classroom to see how students can go beyond what traditional assessments alone can test.
- Other studies may be conducted considering more variables and goals with this study as reference.
Contribution of Authors
The collaborative efforts throughout this journey were instrumental in the success of this study. While the authors were at the forefront of conceptualizing, designing, data collection, and analysis of data, many experts in the field of education provided inputs to enrich the content of this paper.
Funding
This study did not receive grants from any funding agency.
Conflict of Interests
The authors declare no conflict of interests about the publication of this paper.
REFERENCES
- Bañez, R. M. (2016). Recency or Relevance: A Quest for Pedagogical Framework in Teaching Philippine and World Literature in Senior High School. Asia Pacific Journal of Multidisciplinary Research, 4(4).
- Beller, M., & Gafni, N. (2000). Can item format (multiple choice vs. open-ended) account for gender differences in mathematics achievement? Sex Roles: A Journal of Research, 42, 1–21.
- Darling-Hammond, L., & Adamson, F. (2014). Beyond the Bubble Test: How Performance Assessments Support 21st Century Learning. San Francisco, CA: Jossey-Bass.
- Guha, R., Wagner, T., Darling-Hammond, L., Taylor, T., & Curtis, D. (2018). The promise of performance assessments: Innovations in high school learning and college admission. Palo Alto, CA: Learning Policy Institute.
- Dikli, S. (2003). Assessment at a distance: Traditional vs. alternative assessments. The Turkish Online Journal of Educational Technology – TOJET, 2(3), Article 2. ISSN: 1303-6521.
- Exam Soft (2021, November 24). 3 Alternative Assessment Types and How to Use Them. Exam Soft by Turnitin. Retrieved June 25, 2023, from https://examsoft.com/resources/3-alternative-assessment-types/
- FormPlus. (2023). Alternative Assessment: Definition, Types, Examples & Strategies. https://www.formpl.us/blog/alternative-assessment
- Illescas, M. K. A., Ong, A. K. S., & German, J. D. (2023). Online or Traditional Learning at the Near End of the Pandemic: Assessment of Students’ Intentions to Pursue Online Learning in the Philippines. Sustainability, 15(8), 6611. MDPI AG. Retrieved from http://dx.doi.org/10.3390/su15086611
- Gardner, B. M. & Alford, K. (2020). Using Interviews to Assess and Mentor Students. Higher Ed Teaching Strategies From Magna Publications.
- Irawan, M. O. (2017). Students' perceptions on traditional and alternative assessment. State Islamic University of Ar-Raniry, Darussalam-Banda Aceh.
- Law, B. & Eckes, M. (1995). Assessment and ESL. Peguis publishers: Manitoba, Canada
- Nasab, F. G. (2015). Alternative versus Traditional Assessment. Journal of Applied Linguistics and Language Research Volume 2, Issue 6, 2015, pp. 165-178.
- Phongsirikul, M. (2018). Traditional and Alternative Assessments in ELT: Students' and Teachers' Perceptions. rEFLections, 25(1), 61–84.
- Puri, D. (2022). Learning during the pandemic – flexible assessments in online teaching. https://www.teachermagazine.com/in_en/articles/learning-during-the-pandemic-flexible-assessments-in-online-teaching
- Scouller, K. (1998). ‘The influence of assessment method on students’ learning approaches: Multiple choice question examination versus assignment essay. Higher Education, 35, 453–472.
- Seneca College, Humber College, Kenjgewin Teg, Trent University, and Nipissing University. (2022). Designing and developing high-quality student-centred online/hybrid learning experiences. Open Library. Retrieved from https://ecampusontario.pressbooks.pub/qualitycourses/
- Simonson, M., Smaldino, S., Albright, M., & Zvacek, S. (2000). Assessment for distance education (Ch. 11). In Teaching and Learning at a Distance: Foundations of Distance Education. Upper Saddle River, NJ: Prentice-Hall.
- Smith, M. U., & Southerland, S. A. (2023). Classroom Assessment Techniques: Interviews. The National Institute for Science Education; College Level One Team. http://archive.wceruw.org/cl1/flag/cat/interviews/interviews1.htm
- Spinney, J. E. L. (2023). Students' Perceptions of Choice-based Assessment: A Case Study. Journal of the Scholarship of Teaching and Learning, 23(1).
- United Nations Children’s Fund. (2020, July 31). Guidance: Assessing and monitoring learning during the COVID-19 crisis. Humanitarian Response.
- Watering, G. V. D., Gijbels, D., Dochy, F., & Rijt, J. V. D. (2008). Students’ assessment preferences, perceptions of assessment and their relationships to study results. Springer Link.