International Journal of Research and Innovation in Social Science


Pages 7055-7068 | Published: Sep 30, 2025 | Section: Education

Psychometric Properties of the Multiple Choice Mathematics Items of Junior Secondary School Certificate Examination in Ekiti State

OJO, Amos Adewale (Ph.D)1, OLOFIN, Samuel Oluwaseyi (Ph.D)2

1Department of Science Education, School of Science Education, Bamidele Olumilua University of Education, Science and Technology, Ikere Ekiti, Nigeria

2Department of Science Education, Faculty of Education, Ekiti State University, Nigeria

DOI: https://dx.doi.org/10.47772/IJRISS.2025.903SEDU0522

Received: 20 August 2025; Accepted: 27 August 2025; Published: 30 September 2025

ABSTRACT

The study examined the psychometric properties of the multiple-choice Mathematics items of the Junior Secondary School Certificate Examination (JSSCE) in Ekiti State. Specifically, the study examined: the validity and reliability of the items; the difficulty index of the items; and the discriminating power of the items. The study adopted an ex-post facto design. The data were the responses of students to the multiple-choice Mathematics items of the 2022/2023 JSSCE of Ekiti State. The population comprised all Junior Secondary School III students in Ekiti State who wrote the 2022/2023 examination. The sample consisted of 600 Junior Secondary School III students from public and private schools in Ekiti State, selected through a multi-stage sampling procedure. The instruments were the Ekiti State Ministry of Education JSCE multiple-choice objective test in Mathematics for the 2022/2023 session, consisting of fifty (50) items, and the students' Optical Mark Reader (OMR) answer sheets. The items were assumed to have been developed, moderated and validated by the Ekiti State Ministry of Education before they were administered to the respondents. The data collected were analyzed using the difficulty index and the discriminating power, while KR-20 was used to determine the internal consistency of the instrument. The findings showed that the items were moderately valid and internally consistent; however, the items varied in difficulty level and discriminating power. It is recommended that JSCE multiple-choice objective test items in Mathematics be subjected to psychometric analysis before being administered to students.

Keywords: Validity, Reliability, Difficulty Index, Discriminating Power, Mathematics Test Items

INTRODUCTION

As good as the method of generating items looks, many educators seem not to understand what standardization of questions means; they take it to mean the types of textbooks used, the caliber of the people who set the questions and the number of times the questions are vetted by professionals. The questions that deserve urgent answers are: How valid and reliable are these widely generated items? Are the questions not set and selected based on the interests of the individual examiners setting and selecting them? Is the normal process for constructing items properly followed? Is a table of specifications employed? Are the test items standardized? Answers to these questions are not far-fetched, as it is likely that many of the people involved in this assignment are not specialists in test construction. It can therefore be argued that the tests may not be standardized.

The Junior Secondary School Certificate Examination is standardized for the different users of the test, since there is a test manual and other accessory materials used as a guide for administering and scoring the test. The candidates are graded by the Ministry of Education; the test items are regarded as standardized because they are not constructed by classroom teachers, and all examinees are expected to answer the same test items drawn from a common question bank. The Junior Secondary School Certificate Examination is administered to students moving from the ninth year of basic education to senior secondary school under the 9-3-4 system of education. It is conducted for all examinees in their third year of Junior Secondary School in the state and is constructed and administered by the State Government (Ajeigbe & Afolabi, 2014).

The performance of students in Mathematics is disturbing these days. Mathematics has always been a major building block at all levels of life and is considered an integral part of society. Hence, success in Mathematics is one of the prerequisites for admission into virtually all university courses, not only Mathematics-related ones. Mathematics serves almost all other branches of knowledge, such as Physics, Chemistry, Social Studies, Medicine, Agriculture and Geography, and society needs Mathematics to deal with technological processes. In fact, it is a subject that everybody needs. This makes it a serious concern to ensure that every examinee achieves success in Mathematics. Despite all these benefits, most students see Mathematics as their least favorite subject, which makes it difficult for them to succeed in school and to gain admission into tertiary institutions.

Having recognized the importance of these test instruments, it is crucial to ensure that any test instrument constructed and administered by an examination body assesses examinees in a uniform manner. The Junior Secondary School Certificate Examination cannot be left out, because it is also considered a standardized test, not a teacher-made one. JSSCE items are therefore expected to have high technical quality and to have undergone a rigorous and extensive process of standardization covering objectivity, validity, reliability, difficulty level and discriminating power; yet it seems the JSSCE has not gone through all these processes. Evidence from examiners' reports has shown that the performance of students in WAEC and NECO examinations, especially in Ekiti State, has fluctuated between 62% and 88%, compared with their JSSCE results, which are always between 80% and 94%.

Psychometric properties of tests refer to certain attributes inherent in the test items upon which the assessment of candidates is based. These properties include the difficulty indices, the discriminating indices, the power of the distractors, and the validity and reliability indices. It is perhaps worth mentioning that these attributes of a test are most often ignored, and when this occurs, the items will not measure correctly what they are supposed to measure.

The process of test construction is based on validity, reliability and item analysis, which includes the difficulty level, the item-discriminating factors and the distractor index. When setting the questions, the three domains of learning must be covered: the cognitive, the affective and the psychomotor. The researcher observed that some test items may not cover the syllabus, while some may be too difficult or too simple, thereby compromising the difficulty level. Also, some test items do not discriminate, that is, they do not separate high-ability from low-ability students. Some test items do not have good distractors, which makes the options implausible.

Test items are indispensable tools in the evaluation of students' achievement at school. According to Kolawole and Olofin (2018), item analysis (difficulty and discriminating indices) is concerned with ascertaining the worth of the test items and is based on the responses to individual items. They further consider item analysis probably the most important tool for increasing test effectiveness: it is a scientific way of improving the quality of tests and of the test items in an item bank. An item analysis provides three kinds of important information about the quality of test items. Item difficulty, also called the facility index, measures whether an item is too easy or too hard. Item discrimination measures whether an item discriminates between candidates who know the material well and candidates who do not. Effectiveness of alternatives determines whether the distractors (incorrect but plausible options) tend to be chosen by the less able rather than the more able candidates. The researcher observed that the responses of students to each item of the examination seem not to be considered in determining whether the items are good or bad, whether they discriminate correctly between bright and dull students, and whether the incorrect options are chosen by the less able or the more able candidates.
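The three kinds of information described above can be computed directly from a scored response matrix. The sketch below assumes a 0/1 (correct/incorrect) matrix and uses the common upper-lower group convention for discrimination; the study's exact computational method is not specified, so this is illustrative only.

```python
import numpy as np

def item_analysis(responses, group_frac=0.27):
    """Classical item analysis on a 0/1 response matrix.

    responses: (n_students, n_items) array, 1 = correct, 0 = incorrect.
    Returns (difficulty, discrimination) per item. The 27% upper/lower
    split is one common convention, assumed here for illustration.
    """
    responses = np.asarray(responses)
    n = responses.shape[0]
    # Difficulty (facility) index: proportion answering each item correctly.
    difficulty = responses.mean(axis=0)
    # Rank students by total score; take upper and lower groups.
    totals = responses.sum(axis=1)
    order = np.argsort(totals)
    k = max(1, int(round(group_frac * n)))
    lower, upper = responses[order[:k]], responses[order[-k:]]
    # Discrimination: difference in proportion correct between the groups.
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)
    return difficulty, discrimination
```

Distractor effectiveness would be checked the same way, tallying how often each incorrect option is chosen within the upper and lower groups.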

Moreover, various studies have been carried out on the causes and remedies of poor academic performance, such as Ige and Ogunleye (2016), Samer and Mohammad (2015) and Manizheh (2016); a lot therefore remains to be done on the psychometric analysis of these examinations, since test properties affect examinees' results in terms of validity, reliability, difficulty and discriminating indices. Every public examination item is expected to be error-free and to measure what it purports to measure in the examinee. These are the reasons to avoid items that function differently and can unfairly influence examinees' results. Thus, any test parameter that discriminates between two or more subgroups tends to signal a validity threat on the items, because the test score would require a different interpretation for each group.

A good test should be able to differentiate the brilliant students from the dull ones. This is only realizable when carefully constructed tests are set, administered, marked and scored. Tests that are too difficult or too simple rarely make effective evaluation possible. Therefore, there is a need to assess and analyze the psychometric properties of the items of the Junior Secondary School Certificate Mathematics Examination in Ekiti State.

One of the most widely used tools for assessing students' achievement is the test. These tools seem to be fraught with validity and reliability problems, as the process for constructing such tests is often not followed or is misunderstood, and this introduces significant errors into the measurement process. When the measurement is poor, the resulting data-based inferences are inaccurate, which in turn leads to bad decision-making. The merit of any examination depends on its psychometric properties, which are validity, reliability, difficulty level and discriminating power.

Evidence from examination reports has shown that students' performance in the Junior Secondary School Certificate Examination over the last five years has been inconsistent, which may be due to the nature and psychometric properties of the test. The main concern of educational measurement is to capture individual testees' true knowledge, without guessing and without bias in the construction of the test items.

Hence, there is a need to assess the Junior Secondary School Certificate Examination, because it seems that the items constructed for it have not been passing through the required standardization procedures, such as checks on validity, reliability and item bias, before the items are administered. Likewise, it seems that the Ministry of Education has not been establishing the psychometric properties of its items. Over the years, students' performance in the Junior Secondary School Certificate Examination has not been reflected in their outcomes at the senior secondary level, yet placement at this level of education determines the educational careers and destinies of the students. This prompted the researcher to investigate the psychometric properties of the items of the Junior Secondary School Certificate Mathematics Examination in Ekiti State.

The study examined the psychometric properties of the multiple choice Mathematics items of Junior Secondary School Certificate Examination in Ekiti State. Specifically, the study examined:

  1. the validity and reliability of the items of Junior Secondary School Certificate Mathematics Examination;
  2. the difficulty index of the items; and
  3. the discriminating power of the items.

METHODOLOGY

The design used in this study was the ex-post facto design, which is primarily concerned with non-experimental research. The researcher had no direct control over the independent variables, since their manifestation had already occurred and the analysis would be based on events that had happened in the past. Hence, the pre-existing (antecedent) groups defined by correct responses to the Mathematics test items were compared. This design was relevant to this study because it permits analyses to be carried out on existing data. The data for this study were the responses of the students to the multiple-choice Mathematics items of the Junior Secondary School Certificate Examination of Ekiti State.

The population comprised all the Junior Secondary School III students in Ekiti State who wrote the 2022/2023 Junior Secondary School Certificate Examination of Ekiti State. The choice of the 2022/2023 session was informed by the fact that there was no empirical evidence that its psychometric properties had been investigated.

The sample for the study consisted of 600 Junior Secondary School III students from public and private schools in Ekiti State, selected using a multi-stage sampling procedure. In the first stage, two local government areas were selected from each of the senatorial districts in Ekiti State through the purposive sampling technique (areas where both private and public schools are located). In the second stage, four secondary schools were selected from each of the chosen local government areas using the stratified random sampling technique, with location and school type as strata: two private and two public schools, one rural and one urban (the urban schools being those sited in the local government headquarters), making a total of twenty-four (24) schools. In the third stage, twenty students were selected through the stratified sampling technique, with gender as the stratum, from each of the twenty-four (24) Junior Secondary Schools earlier selected.
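As an illustration only, the third (within-school) stage could be sketched as follows; the roster structure, equal gender quota and seed are hypothetical assumptions, not taken from the study.

```python
import random

def select_students(schools, per_school=20, seed=1):
    """Stage-3 sketch: stratified random selection of students by gender.

    schools: dict mapping school name -> {'male': [...], 'female': [...]}
    lists of student IDs (hypothetical structure). Half of the per-school
    quota is drawn at random from each gender stratum.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    sample = {}
    for name, roster in schools.items():
        half = per_school // 2
        sample[name] = (rng.sample(roster["male"], half)
                        + rng.sample(roster["female"], half))
    return sample
```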

The instruments used for this study were the Ekiti State Ministry of Education JSCE multiple-choice objective test in Mathematics for the 2022/2023 session, consisting of fifty (50) items, and the students' Optical Mark Reader (OMR) answer sheets. The OMR consisted of two sections: section A elicited bio-data information, while section B contained the options A to E, affording the students the opportunity to shade the key to each of the fifty items of the test.

The items were believed to have been developed, moderated, validated and used by the Ekiti State Ministry of Education for the JSSCE. The fifty multiple-choice JSCE Mathematics items of the 2022/2023 session were therefore assumed to have been validated and moderated, and their reliability ensured, by the Ekiti State Ministry of Education before they were administered to the respondents.

The Ekiti State Ministry of Education Mathematics JSCE multiple-choice objective test had already been administered to the students by the Ministry, and the students' Optical Mark Reader (OMR) answer sheets had been marked. The OMR answer sheets were obtained from the Ekiti State Ministry of Education for data analysis.

The data collected were analyzed using the difficulty index and discriminating power formulae to answer the research questions. KR-20 was used to determine the internal consistency of the instrument.
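A minimal sketch of the KR-20 (Kuder-Richardson Formula 20) computation on dichotomously scored responses, assuming a complete 0/1 response matrix:

```python
import numpy as np

def kr20(responses):
    """KR-20 internal consistency for dichotomous (0/1) items.

    responses: (n_students, n_items) array of 1/0 scores.
    KR-20 = (k/(k-1)) * (1 - sum(p*q) / var(total scores)).
    For dichotomous items KR-20 coincides with Cronbach's alpha.
    """
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]
    p = x.mean(axis=0)               # proportion correct per item
    q = 1.0 - p                      # proportion incorrect per item
    var_total = x.sum(axis=1).var()  # population variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / var_total)
```

Population variance (ddof=0) is used for the totals so that it is consistent with the item variances p*q.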

RESULTS

The validity of the instrument was established using concurrent validity, which basically involves correlating the test with an already existing and well-established test.

Table 1: Pearson’s Correlation of the Junior Secondary School Certificate Mathematics Examination items and standardized test items

                                          JSSCE     Standardized (NECO)
JSSCE                Pearson Correlation  1         .492**
                     Sig. (2-tailed)                .000
Standardized (NECO)  Pearson Correlation  .492**    1
                     Sig. (2-tailed)      .000
**. Correlation is significant at the 0.01 level (2-tailed).

Table 1 shows a moderate and positive correlation coefficient of 0.492 between the test items and standardized test items, which was significant at p<0.05. This implies that the Junior Secondary School Certificate Mathematics Examination items were moderately valid.
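The concurrent-validity check reported in Table 1 amounts to a Pearson correlation between the two sets of scores. A minimal sketch follows; the pairing of JSSCE totals with criterion (standardized-test) totals per student is assumed, not reproduced from the study's data.

```python
import numpy as np

def concurrent_validity(test_scores, criterion_scores):
    """Pearson correlation between a test and an established criterion.

    test_scores, criterion_scores: equal-length sequences of per-student
    totals (hypothetical data). Returns r in [-1, 1].
    """
    x = np.asarray(test_scores, dtype=float)
    y = np.asarray(criterion_scores, dtype=float)
    return np.corrcoef(x, y)[0, 1]
```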

Table 2: Reliability Analysis of the Junior Secondary School Certificate Mathematics Examination items

Mean Variance Std. Deviation N of Items Cronbach’s Alpha
25.166 24.542 4.954 47 0.780

The test had a total of fifty items, but only forty-seven were valid, with a mean of 25.166 and a standard deviation of 4.954. Table 2 shows a high and positive reliability coefficient, with a Cronbach's Alpha of 0.780 (for dichotomous items this coincides with KR-20). This implies that the Junior Secondary School Certificate Mathematics Examination items are reliable and internally consistent.

Table 3: Analysis of difficulty indices of Mathematics test items of the Junior Secondary School Certificate Mathematics Examination

Item Difficulty Index Difficulty Index in % Remark   Item Difficulty Index Difficulty Index in % Remark
1 1.00 100 Easy   26 1.00 100 Easy
2 1.00 100 Easy   27 0.29 29 Difficult
3 1.00 100 Easy   28 0.35 35 Difficult
4 0.37 37 Difficult   29 1.00 100 Easy
5 0.57 57 Moderately Difficult   30 1.00 100 Easy
6 1.00 100 Easy   31 1.00 100 Easy
7 0.27 27 Difficult   32 1.00 100 Easy
8 1.00 100 Easy   33 1.00 100 Easy
9 1.00 100 Easy   34 0.37 37 Difficult
10 0.23 23 Difficult   35 0.60 60 Moderately Difficult
11 0.48 48 Difficult   36 0.34 34 Difficult
12 0.17 17 Very Difficult   37 Bad Item
13 0.38 38 Difficult   38 0.13 13 Very Difficult
14 0.26 26 Difficult   39 1.00 100 Easy
15 1.00 100 Easy   40 0.17 17 Very Difficult
16 0.35 35 Difficult   41 0.33 33 Difficult
17 0.41 41 Difficult   42 0.44 44 Difficult
18 1.00 100 Easy   43 0.41 41 Difficult
19 0.34 34 Difficult   44 Bad Item
20 1.00 100 Easy   45 0.42 42 Difficult
21 0.28 28 Difficult   46 1.00 100 Easy
22 0.26 26 Difficult   47 0.29 29 Difficult
23 0.30 30 Difficult   48 0.46 46 Difficult
24 0.34 34 Difficult   49 Bad Item
25 0.41 41 Difficult   50 0.42 42 Difficult

Table 4: Summary of Classification of difficulty indices of Mathematics test items of the JSSCE examination

S/N CLASSIFICATION NO OF ITEMS PERCENTAGE
1 Very Difficult (0.00 – 0.20) 3 6.00
2 Difficult (0.21 – 0.50) 25 50.00
3 Moderately Difficult (0.51 – 0.80) 2 4.00
4 Easy (0.81 – 1.00) 17 34.00
5 Bad Items 3 6.00
  Total 50 100.00

Tables 3 and 4 show the difficulty indices of the test items of the Junior Secondary School Certificate Mathematics Examination. The difficulty index in Table 3 is the index computed for each item, while the difficulty index in percentage is the percentage of the sample that answered the item correctly. The remarks are based on the classification of the Schreyer Institute Testing Centre (2017): items with a difficulty index of 0 – 0.20 are classified as very difficult, 0.21 – 0.50 as difficult, 0.51 – 0.80 as moderately difficult, and 0.81 – 1.00 as easy.
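The classification above can be expressed as a small function; the inclusive upper bounds are assumed from the stated ranges.

```python
def classify_difficulty(p):
    """Map a difficulty index p (0..1) to the bands used in Table 4.

    Bands follow the Schreyer Institute Testing Centre (2017)
    classification cited in the text; upper bounds assumed inclusive.
    """
    if p <= 0.20:
        return "Very Difficult"
    if p <= 0.50:
        return "Difficult"
    if p <= 0.80:
        return "Moderately Difficult"
    return "Easy"
```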

Table 4 summarizes the difficulty indices of the test items of the Junior Secondary School Certificate Mathematics Examination. Three items, representing 6%, are classified as very difficult, 25 items (50%) as difficult, 2 items (4%) as moderately difficult, and 17 items (34%) as easy. Three of the total 50 items (6%) were bad items with no correct answer. The pie chart below further summarizes the difficulty indices of the test items.

Figure 1: Pie chart showing the difficulty indices of test items of the Junior Secondary School Certificate Mathematics Examination

Table 5: Analysis of discriminating powers of Mathematics test items of the JSSC examination

Item Discriminating Powers Cronbach’s Alpha if item deleted Remark   Item Discriminating Powers Cronbach’s Alpha if item deleted Remark
1 0.000 0.780 Poor   26 0.000 0.780 Poor
2 0.000 0.780 Poor   27 0.362 0.769 Fair
3 0.000 0.780 Poor   28 0.281 0.754 Good
4 0.371 0.764 Good   29 0.000 0.780 Poor
5 0.379 0.769 Good   30 0.000 0.780 Poor
6 0.000 0.780 Poor   31 0.000 0.780 Poor
7 0.111 0.772 Poor   32 0.000 0.780 Poor
8 0.000 0.780 Poor   33 0.000 0.780 Poor
9 0.000 0.780 Poor   34 0.321 0.756 Good
10 0.361 0.769 Good   35 0.405 0.812 Excellent
11 0.348 0.770 Good   36 0.435 0.812 Excellent
12 0.114 0.779 Poor   37 Bad Item
13 0.381 0.773 Good   38 0.000 0.780 Poor
14 0.372 0.768 Poor   39 0.000 0.780 Poor
15 0.000 0.780 Poor   40 0.133 0.763 Poor
16 0.296 0.771 Fair   41 0.437 0.772 Excellent
17 0.284 0.762 Fair   42 0.464 0.771 Excellent
18 0.000 0.780 Poor   43 0.423 0.772 Excellent
19 0.345 0.754 Good   44 Bad Item
20 0.000 0.780 Poor   45 0.332 0.715 Good
21 0.095 0.775 Poor   46 0.000 0.780 Poor
22 0.211 0.764 Poor   47 0.364 0.767 Good
23 0.330 0.752 Good   48 0.277 0.717 Fair
24 0.409 0.762 Excellent   49 Bad Item
25 0.414 0.765 Excellent   50 0.373 0.715 Good

Table 6: Summary of Classification of discriminating powers of Mathematics test items of the Junior Secondary School Certificate Mathematics Examination

S/N CLASSIFICATION NO OF ITEMS PERCENTAGE
1 Poor (<0.250) 24 48.00
2 Fair (0.251 – 0.300) 4 8.00
3 Good (0.301 – 0.400) 12 24.00
4 Excellent (0.401 – 1.000) 7 14.00
5 Bad Items 3 6.00
  Total 50 100.00

Tables 5 and 6 show the discriminating powers of the Junior Secondary School Certificate Mathematics Examination items. The discriminating power in Table 5 is the discriminating index of each test item, while "Cronbach's Alpha if item deleted" is the reliability of the test when that item is removed. The remarks are based on the classification of the Schreyer Institute Testing Centre (2017): items with a discriminating index of 0 – 0.250 are classified as having poor discriminating power, 0.251 – 0.300 as fair, 0.301 – 0.400 as good, and 0.401 – 1.000 as excellent.
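The discrimination bands, together with a Table 6-style tally, can be sketched the same way; inclusive upper bounds are again assumed.

```python
from collections import Counter

def classify_discrimination(d):
    """Map a discriminating index d to the bands used in Table 6."""
    if d <= 0.250:
        return "Poor"
    if d <= 0.300:
        return "Fair"
    if d <= 0.400:
        return "Good"
    return "Excellent"

def summarize(indices):
    """Tally items per band, as in the Table 6 summary."""
    return Counter(classify_discrimination(d) for d in indices)
```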

Table 6 summarizes the discriminating powers of the test items of the Junior Secondary School Certificate Mathematics Examination. Twenty-four items, representing 48%, are classified as having poor discriminating power, 4 items (8%) fair, 12 items (24%) good, and 7 items (14%) excellent discriminating power. Three of the total 50 items (6%) were bad items with no correct answer. The pie chart below further summarizes the discriminating indices of the test items.

Figure ii: Pie chart showing the discriminating power of test items of the Junior Secondary School Certificate Mathematics Examination

Table 7: Table of Decision of Mathematics test items of the Junior Secondary School Certificate Mathematics Examination

Item Difficulty Index Discriminating Powers Action   Item Difficulty Index Discriminating Powers Action
1 1.00 0.000 Delete   26 1.00 0.000 Delete
2 1.00 0.000 Delete   27 0.29 0.362 Retain
3 1.00 0.000 Delete   28 0.35 0.281 Retain
4 0.37 0.371 Retain   29 1.00 0.000 Delete
5 0.57 0.379 Retain   30 1.00 0.000 Delete
6 1.00 0.000 Delete   31 1.00 0.000 Delete
7 0.27 0.111 Revise   32 1.00 0.000 Delete
8 1.00 0.000 Delete   33 1.00 0.000 Delete
9 1.00 0.000 Delete   34 0.37 0.321 Retain
10 0.23 0.361 Retain   35 0.60 0.405 Retain
11 0.48 0.348 Retain   36 0.34 0.435 Retain
12 0.17 0.114 Delete   37 Delete
13 0.38 0.381 Retain   38 0.13 0.000 Delete
14 0.26 0.372 Retain   39 1.00 0.000 Delete
15 1.00 0.000 Delete   40 0.17 0.133 Delete
16 0.35 0.296 Retain   41 0.33 0.437 Retain
17 0.41 0.284 Retain   42 0.44 0.464 Retain
18 1.00 0.000 Delete   43 0.41 0.423 Retain
19 0.34 0.345 Retain   44 Delete
20 1.00 0.000 Delete   45 0.42 0.332 Retain
21 0.28 0.095 Revise   46 1.00 0.000 Delete
22 0.26 0.211 Revise   47 0.29 0.364 Retain
23 0.30 0.330 Retain   48 0.46 0.277 Retain
24 0.34 0.409 Retain   49 Delete
25 0.41 0.414 Retain   50 0.42 0.373 Retain

Table 7 gives the action to be taken on each item in the test, based on its difficulty index and discriminating power. Items with both an acceptable difficulty index and acceptable discriminating power are retained; items with either an acceptable difficulty index or acceptable discriminating power are revised; items with a poor difficulty index and poor discriminating power are deleted.
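The decision rule can be sketched as a function. The exact "acceptable" thresholds are assumptions inferred from the Table 7 entries (difficulty within the 0.21 – 0.80 band, discrimination above the 0.250 "Poor" cutoff), not stated explicitly by the study.

```python
def decide(p, d):
    """Retain/Revise/Delete rule sketched from Table 7.

    p: difficulty index, d: discriminating power.
    Assumed thresholds: 0.20 < p <= 0.80 counts as acceptable
    difficulty; d > 0.250 counts as acceptable discrimination.
    """
    ok_p = 0.20 < p <= 0.80
    ok_d = d > 0.250
    if ok_p and ok_d:
        return "Retain"
    if ok_p or ok_d:
        return "Revise"
    return "Delete"
```

Checked against sample rows of Table 7: item 4 (0.37, 0.371) is retained, item 7 (0.27, 0.111) revised, and items 1 (1.00, 0.000) and 12 (0.17, 0.114) deleted.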

Items 4, 5, 10, 11, 13, 14, 16, 17, 19, 23, 24, 25, 27, 28, 34, 35, 36, 41, 42, 43, 45, 47, 48 and 50 were accepted and retained. Items 7, 21 and 22 need to be revised and corrected, while items 1, 2, 3, 6, 8, 9, 12, 15, 18, 20, 26, 29, 30, 31, 32, 33, 37, 38, 39, 40, 44, 46 and 49 were deleted. In all, 24 items (48% of the total) were retained, 3 items (6%) need to be revised, and 23 items (46%) were deleted. The pie chart below further shows the action taken on each of the items in the test.

Figure iii: Pie chart showing the decision taken on the test item

DISCUSSION

This study revealed that the test items of Junior Secondary School Certificate Mathematics Examination were moderately valid, reliable and internally consistent. This finding supports the findings of Joe-Kinanee and Orluwene (2017), Mozaffer and Farhan (2012) and Kolawole (2007) who concluded that the mathematics items of Junior Secondary School Certificate Examination (JSSCE), NECO and WAEC were valid and reliable.

The study further revealed that three items representing 6% are classified as very difficult items, 25 items representing 50% are classified as difficult items, 2 items representing 4% are classified as moderately difficult while 17 items representing 34% are classified as easy. Three items of the total 50 items representing 6% were bad items with no correct answer. This finding is in consonance with the findings of Joe-Kinanee and Orluwene (2017) who concluded that the Junior Secondary School Certificate Examination (JSSCE) for Mathematics vary in their difficulty index. However, this finding contradicted the psychometric properties of WAEC and NECO Mathematics as reported by Kolawole (2007) who concluded that all their items maintained the same difficulty level.

It was also revealed that twenty-four items, representing 48%, are classified as having poor discriminating power, 4 items (8%) fair, 12 items (24%) good, and 7 items (14%) excellent discriminating power. Three of the total 50 items (6%) were bad items with no correct answer. This finding contradicts the psychometric properties of the WAEC and NECO Mathematics examinations as reported by Kolawole (2007), who concluded that all their items maintained the same discriminating power. However, the finding of Joe-Kinanee and Orluwene (2017) supports the present one, as they concluded that the Junior Secondary School Certificate Examination (JSSCE) Mathematics items vary in their discriminating power.

CONCLUSION

Based on the findings of the study, the items were moderately valid and internally consistent. However, the items varied in difficulty level and discriminating power.

RECOMMENDATION

Based on the findings of this study, it is recommended that the JSCE multiple-choice objective test items in Mathematics be subjected to psychometric analysis before being administered to the students.

REFERENCES

  1. Abedalaziz, N. (2010). A gender-related differential item functioning of mathematics test items. International Journal of Educational and Psychological Assessment, 5, 101-116.
  2. Adebule, S. O. (2004). Gender differences on a locally standardized anxiety rating scale in Mathematics for Nigeria Secondary School. Nigeria Journal of Counseling and Applied psychology 1(1) 22-29.
  3. Adebule, S. O. (2009). Reliability and levels of difficulty of objective test items in mathematics achievement tests: A study of ten senior secondary schools in five local government areas of Akure, Ondo. Educational Research and Review, 4(11), 585 – 587.
  4. Adebule, S. O. and Oluwatayo J. A. (2011). Vocational Evaluation and Career Testing. Gold Prints Publisher, Lagos Nigeria
  5. Adedayo, O. (2017). Mathematics phobia, diagnosis and prescription. National Mathematical Centre, 1st Annual Lecture, Abuja.
  6. Adediwura, A. A. (2013). A Comparative Study of Item Response Theory and Generalized Linear Model Methods of detecting differential item functioning in dichotomous test. Research Journal in Organizational Psychology and Education Studies, 2(6), 308-316.
  7. Adedoyin, O. O. (2010). Using IRT approach to detect gender biased items in public Examinations. Educational research and reviews academic journals, 5(7), 385-399.
  8. Aiken, L. R. (2007). Intellectual variables of mathematics achievement. Research Journal of School Psychology, 9, 201-206.
  9. Ajeigbe, T. O. & Afolabi, E. R. I. (2014). Assessing unidimensionality and differential item functioning in qualifying examination for Senior Secondary School Students, Osun State, Nigeria. World Journal of Education, 4(4), 30-37.
  10. Alonge, M. F. (2004). Measurement and Evaluation of Education and Psychological Testing (2nd ed.). Ado Ekiti, Nigeria: Adebayo Printing (Nig.) Ltd.
  11. Amajuoyi, I. J. (2015). Verification of differential item functioning (DIF) status of West African Senior School Certificate Examination (WASSCE) in Chemistry. Journal of Educational Foundations, 5(1), 165-175.
  12. Anigbo, L. C. (2006). Development and Standardization of Mathematics Achievement Test Batteries for Primary Four Pupils in Nigeria. Unpublished doctoral dissertation, University of Nigeria, Nsukka.
  13. Anita, J. K. (2013). A study of teacher characteristics and students’ academic achievement: Case of Biology subject in selected secondary schools in Nandi South District, Kenya. Indian Journal of Research, 2(3), 66-69.
  14. Ariyo, S.O. (2017). Diagnostic assessment and effect of after-school programmes on the mathematics learning outcomes of low achieving students in Oyo State secondary schools. A Ph.D Post field presented at a seminar of the International Centre for Educational Evaluation, Institute of Education, University of Ibadan.
  15. Cormier, D. C. (2012). Evaluating the influence of differential item functioning for race and gender on STAR Mathematics items. Retrieved from www.renlearn.com
  16. DeVellis, R. (2016). Scale development: Theory and applications. Thousand Oaks, CA: Sage.
  17. Ebisine, S. S. (2013). Cultural Imperatives in Differential Item Functioning (DIF) in Mathematics. Academic Journal of Interdisciplinary Studies, 2(10), 85-92.
  18. Emaikwu, S.O. (2011). Evaluation of students’ ability in schools. A paper presented at a workshop on teaching practice on Friday, 29th July in the College of Agricultural and science Education, Federal University of Agriculture, Makurdi, Benue state.
  19. Emaikwu, S. O. (2012). Issues in test item bias in public examinations in Nigeria and implications for testing in Nigeria. International Journal of Academic Research in Progressive Education and Development, 1(1),175-186
  20. Essen, C. B., Ukofia, I. F., Bassey, B. A., & Idika, D. O. (2017). Bridging the gap in the current global initiative in validation process in psychometrics: Nigerian perspective. International Journal of Scientific Research in Education, 10(1), 1-11.
  21. Federal Republic of Nigeria (2013). National policy of education. Abuja: Federal Government press.
  22. Federal Republic of Nigeria (2014). National policy on education (Revised) Lagos: National Educational Research Council Press.
  23. Fraenkel, J. R. & Wallen, N. E. (2003). How to design and evaluate research in education (7th ed.). New York, NY: The McGraw-Hill Companies, Inc.
  24. Gregory, S. (2011). Mathematics and deaf children. In S. Gregory, P. Knight, W. McCracken, & L. Watson (Eds.), Issues in deaf education (pp. 119-124). London, United Kingdom: David Fulton.
  25. Gronlund, N. & Linn, R. (2000). Measurement and evaluation in teaching (7th ed.). Upper Saddle River, NJ: Prentice Hall.
  26. Jandaghi, G., & Shateria, F. (2008). Rate of validity, reliability and difficulty indices for teacher-designed exam questions in first year high school. International Journal of Human Sciences, 5(2).
  27. Joseph, E. U. (2012). Psycho-Academic variables and mathematics achievement of 9th grade students. Nigeria British Journal of Education, Society & Behavioural Science, 2(2), 174-83.
  28. Joshua, M.T. (2005). Test/item Bias in Psychological Testing: Evidence on Nigeria System. A Paper Presented at the Annual Conference of Nigerian Association of Educational Psychologists held at Ahmadu University Zaria from 24th – 28th.
  29. Kabasakal, D. B., Arsan, O., Gok, M. & Kelecioglu, H. (2014). Item bias analysis of the university entrance examination. Egitim ve Bilim, 36(161), 3.
  30. Kelly, A. (2007). Why girls don’t do Science. Ibadan: University Press.
  31. Kinanee, J. N. B. & Orluwene, G. W. (2017). A comparative study on item characteristics of 2014-2016 Mathematics objective tests in Junior Secondary School Certificate Examination questions in Rivers State. International Journal of Mathematics Trends and Technology (IJMTT), 52(8).
  32. Kolawole, E. B. (2010). Principles of tests construction and administration. Lagos: Bolabay Publications.
  33. Kolawole, E. B. (2012). Gender issues and academic performance of senior secondary school students in Mathematics computation tasks in Ekiti State Nigeria. Pakistan  Journal of Social Sciences, 15 (1), 102 -111
  34. Kolawole, E. B. & Oginni, I. O. (2009). Effectiveness of laboratory method of teaching on students' performance in senior secondary school Mathematics. The Journal of the Mathematical Association of Nigeria (ABACUS), 34(1), 120-125.
  35. Kolawole, E. B. & Olofin, S. O. (2018). Effects of Goal Setting Skill and Peer Modelling Strategies on Academic Performance of Ekiti State Students in Mathematics. In Book of Reading of Prof. Onwuamanam, Ado – Ekiti: University Press, 293 – 303
  36. Kolawole, E. B. & Oluwatayo, J. A (2004). Mathematics for everyday living. Implication for secondary school. Journal of Mathematics Association of Nigeria, 47-57
  37. Kolawole, E. B. & Popoola A. A. (2009). Effect of using problem solving method in the teaching of senior secondary school mathematics on students’ academic performance. International journal of Mathematics Education, 16 (2), 212 – 231
  38. McMillan, J. H., Schuma, O. & Chen, A. (2006). Fundamental assessment principles for teachers and school administrators. Practical Assessment, Research & Evaluation, 7(8).
  39. Messick, S. J. (2013). Assessment in higher education: Issues of access, quality, student development and public policy. New York, NY: Routledge.
  40. Mislevy, R. J., (2007). What can We Learn from International Assessments? Educational Evaluation and Policy Analysis.
  41. National Council for Curriculum Assessment (2015). Discussion Paper on International Trends in Mathematics. A Paper Published by the Government of Ireland.
  42. Nkpone, H. L. (2001). Validation of physics achievement test. Unpublished Ph.D Thesis, Department of Education, University of Nigeria, Nsukka.
  43. Nworgu, B. G. (2011). Differential item functioning: A critical issue in regional quality assurance. Paper presented in NAERA conference.
  44. Obinne, A. D. & Amali, A. O. (2014). Differential Item Functioning: The Implication for Educational Testing in Nigeria. International Review of Social Sciences and Humanities, 7 (1), 52-65
  45. Ogbebor, U.C & Onuka A.O.U. (2013). Differential item functioning as an item bias indicator. Journal of International Education Research, 367-373. http://www.interesiials.org/ER
  46. Ojerinde, D. (1986). Educational tests and measurement. Ibadan, Nigeria: Codat Audio-Visual Services.
  47. Olofin, S. O. (2019). Effects of Kolawole’s problem-solving teaching strategy and teachers’ characteristics on the academic performance of senior secondary school students in Mathematics in Nigeria. Unpublished Ph.D Thesis, Department of Science Education, Ekiti State University
  48. Olofin, S. O. & Kolawole, E. B. (2020). Effect of Kolawole's problem-solving teaching strategy on the academic performance of secondary school students in Mathematics in Nigeria. Advances in Social Sciences Research Journal, 7(2), 68-77. DOI: 10.14738/assrj.72.7749.
  49. Olufemi, A. S. & Oluseyi, A. O. (2015). Differential item functioning of senior secondary school uniform promotion English Language multiple choice examination questions in Ekiti State. International Advanced Journal of Teaching and Learning, 1(2), 1-6.
  50. Oluwatayo, J. A. (2007). Continuous assessment scores as predictors of students' grades in senior school certificate examination chemistry. Journal of Research in Education, International Research and Development Institute, 4(2), 81-84.
  51. Opie, E.O. (2005). Educational testing in West Africa, Lagos: Premier Press and Publishers.
  52. Oragwam, E.O. (2004). Development and standardization of a national consciousness scale for federal unity secondary schools in Nigeria, Unpublished Ph.D Thesis Department of Science Education, University of Nigeria, Nsukka.
  53. Osadebe, P. U. (2013), Evaluation Techniques, Journal of Educational Research and Development 12 (1), 56 – 63
  54. Oso, C. B. (2015). Continuous assessment as a predictor of academic performance in school certificate examinations in Ekiti State. Unpublished M.Ed. Thesis, University of Ado-Ekiti, Nigeria.
  55. Pei, L. K., & Li, J. (2010). Effects of unequal ability variances on the performance of logistic regression, Mantel-Haenszel, SIBTEST IRT, and IRT likelihood ratio for DIF detection. Applied Psychological Measurement, 34(6), 453-456
  56. Popham, W. J. (2008). All about Assessment: A misunderstood grail. Educational Leadership, 66(1), 82-83.
  57. Raju, N.S. (1988). The area between two item characteristic curves. Psychometrika, 53, 495-502.
  58. Taylor, C. S., & Lee, Y. (2012). Gender DIF in reading and mathematics tests with mixed item formats. Applied Measurement in Education, 25(3), 246-280
  59. Thissen, D., & Wainer, H. (Eds.). (2011). Test scoring. New York, NY: Routledge.
  60. Ubi, I. O., Joshua, M. T., & Umoinyang, I. E. (2012) Assessment of dimensionality of mathematics tests of university matriculation examination in Nigeria: Implications for regional development. Journal of Educational Assessment in Africa, 7, 122- 130
  61. Wang, C. (2010). Differential item functioning analysis of biology test of the College Entrance Examination Taiwan (Unpublished doctoral dissertation). University of Illinois, Urbana-Champaign.
