International Journal of Research and Scientific Innovation (IJRSI)



Effects of Computer-Based Test (CBT) and Paper-And-Pencil Test (PPT) on Academic Achievement and Test Anxiety of Tertiary Institution Students in Educational Research in Delta State

Abanobi, C. C. (Ph.D); Nwaozor, Christopher Zenoyi & Okonye, Clementina Obiageri

Department of Educational Psychology, F.C.E (T), Asaba Delta State

DOI: https://doi.org/10.51244/IJRSI.2024.1101003

Received: 06 December 2023; Revised: 21 December 2023; Accepted: 25 December 2023; Published: 26 January 2024

ABSTRACT

This study examined the effects of Computer-Based Test (CBT) and Paper-and-Pencil Test (PPT) on the academic achievement and test anxiety of tertiary institution students in educational research in Delta State. Six research questions guided the study, and eight null hypotheses were formulated and tested at the .05 level of significance. The study utilized a pretest-posttest non-randomized control group design involving experimental and control groups. The population comprised final-year students who offered educational research in all tertiary institutions in Delta State, and the sample consisted of 113 such students. The Educational Research Achievement Test (ERAT) and the Test Anxiety Inventory (TAI) were used as instruments for data collection. The instruments were validated by experts in Educational Measurement and Evaluation, and the reliability coefficient of the ERAT was 0.86. Mean statistics were used to answer the research questions while analysis of covariance was used to test the null hypotheses at the .05 level of significance. The findings revealed, among others, that the mean achievement scores of students exposed to PPT were higher than those of students exposed to CBT, and the difference in their mean achievement scores was significant. The mean achievement scores of male students exposed to CBT were slightly higher than those of female students exposed to CBT, but the difference in their mean achievement scores was not significant. Students exposed to PPT were more test anxious than those exposed to CBT, yet the difference in their mean test anxiety scores was not significant. The interaction effect between gender and test mode with respect to either achievement or test anxiety was not significant. Based on the findings, the study recommended, among others, that tertiary institution authorities and other stakeholders in education should adopt both CBT and PPT as forms of student assessment in various tertiary institution examinations, and that government should provide tertiary institutions in the country with adequate computers and internet facilities to give students enough CBT practical sessions.

Keywords: Computer-Based Test, Paper-and-Pencil Test, Academic Achievement, Test Anxiety, Students

INTRODUCTION

Assessment is a fundamental activity in the learning process because it is not only used to obtain information on learners’ knowledge, understanding, abilities and skills but can also be used to determine the learning outcome itself, advancing the learning process through appropriate feedback mechanisms. Assessment of students’ academic achievement can be done through the use of a paper-and-pencil test or a computer-based test.

Paper-and-Pencil Test (PPT) is a method in which students are assessed using paper and pencil. PPT is a written exam (taken with pen or pencil and paper) as opposed to an exam taken electronically via computer. Students are expected to read the assessment on paper and answer a given set of questions at the desired performance level using paper and a pencil (CTB/McGraw-Hill, 2011). PPT therefore generally refers to tests in which questions are presented on paper and test takers respond in writing. Advantages of PPT include its portability and the fact that it can be used in any setting: rural, semi-urban or urban, whether or not there is electricity, as opposed to a test administered electronically. Additionally, there are no database crashes in PPT because the students’ responses are written and documented and therefore cannot be lost, unlike in electronic tests. PPT sometimes makes it easier for testees to think and gives them a sense of purpose when writing tests (Best Answer, n.d.).

However, PPT has numerous limitations. For example, Sanni and Mohammad (2015) noted that PPT is characterized by various forms of examination malpractice such as bringing in unauthorized materials, writing on currency notes and identity cards, spying on other candidates in the examination hall, substitution of answer sheets, and alteration of examination scores or grades. Similarly, Alabi, Issa and Oyekunle (2012) asserted that PPT in external examinations has many problems: tedious processes, as the examination is conducted at various and distant centres simultaneously and marked manually; high risks of accidents during travel by both the staff involved and the prospective candidates; the cost of conducting the examination on the part of the examination bodies, including honoraria for invigilators, coordinators, markers, collators and other allied staff; subjective scoring and plausible manipulation of results; late release of results; missing scripts; and examination malpractice.

Apart from PPT, students can alternatively be assessed through the use of modern computers via Computer-Based Test (CBT). This is one of the recent innovative approaches in the field of education and assessment under the influence of modern technology. CBT is a method of administering tests in which the responses are electronically recorded, assessed, or both. Sorana-Daniela and Lorentz (2007) defined CBT as tests or assessments administered by computer, either in stand-alone or dedicated-network form, or by other technology devices linked to the internet or World Wide Web. The many advantages of CBT have been extensively documented and demonstrated in the literature: it allows educators to collect data on students’ testing strategies, intermediate progress, amount of time spent on each question, and thought processes, in addition to their final answers. This information is based on analyses of times and sequences in data records that track students’ paths through each task, their choices of which materials to access, and their decisions about when to begin responding to items (Bridgeman, 2009; Buško, 2009; Csapó, Ainley, Bennett, Latour, & Law, 2010; Kozma, 2009; Martin, 2009; Thompson & Weiss, 2009; Tucker, 2009).

Despite the many advantages of CBT, this does not mean that CBTs are intrinsically better than PPTs (John, Cynthia, Judith & Tim, 2002). CBT also has some drawbacks. For example, examinees need computer literacy in order to eliminate the mode effect in computer-based testing (Alderson, 2000). CBT may not be successfully administered without electricity, especially in rural areas. Additionally, some students may become anxious when tests are presented on a computer. Open-ended questions are rarely presented in computerized formats because such questions are usually scored by humans, and human interaction does not exist in CBT (Brown, 2003). Also, computer crashes are more difficult to resolve than broken pencils.

Despite its drawbacks, CBT is now gaining popularity because of the benefits accruable from it. This has made some developed countries move from the traditional test delivery mode to CBT. Nigeria is not left out, as some tertiary institutions have started using CBT to conduct their Post Unified Tertiary Matriculation Examination (PUTME). Also, some tertiary institutions in Nigeria now use CBT for their internal examinations; for example, Nnamdi Azikiwe University has used CBT for two semesters for General Studies (GS) examinations. This is because CBT provides powerful tools to meet the new challenges of designing and implementing assessment methods that go beyond the PPT and facilitate the recording of a broader repertoire of cognitive skills and knowledge (Mubashrah, Tariq & Shami, 2012).

Research findings are inconclusive as to whether there are differences between the scores obtained via CBT and PPT (Alabi, Issa & Oyekunle, 2012). Many studies have been conducted to evaluate the comparability of CBT and PPT. Some revealed a significant difference between the two testing modes on test scores (e.g. Scheuermann & Björnsson, 2009; Choi, Kim, & Boo, 2003), while others reported opposite or inconsistent results (Al-Amri, 2009). Research findings on the preference for CBT or PPT by various stakeholders in education and other fields have likewise been quite varied in the literature. This was shown in an online survey by Lim, Ong, Wilder-Smith, and Seet (2006) on medical students’ attitudes toward CBT versus PPT testing in Singapore: a higher percentage of the students preferred CBT to PPT. In the same vein, Clariana and Wallace (2002) found that CBT delivery impacted positively on students’ scores compared to PPT, with the CBT group out-performing the PPT group. On the contrary, other studies carried out on CBT and PPT (Dermo & Eyre, 2008; George, 2011) reached the opposite conclusion: students believed the PPT enhanced their performance while CBT had a negative effect, among other varied results. All the above studies were conducted in overseas countries.

Much has also not been said in research reports about the effects of CBT and PPT on test anxiety and academic achievement in Nigeria. Test anxiety is an intense fear of performing poorly on assessments. It is characterized by feelings of nervousness and discomfort paired with cognitive difficulties (Columbus, 2008). Akman-Yesilel (2012) submitted that anxiety is a term used for several disorders that cause nervousness, fear, apprehension and worrying. It results in high levels of stress and apprehension during testing/evaluative situations that significantly interfere with performance, emotional and behavioural well-being, and attitudes toward school (Cizek & Burg, 2006; Huberty, 2009).

In educational settings, test anxiety is common where the demands of a testing situation can incite a fear of failure, a threat to self-esteem and worry over how performance will be judged by others (Putwain, 2008). According to Harris and Coy (2003), one of the most threatening events that cause anxiety in students today is testing. Similarly, Segool (2009) observed that test anxiety affects students’ test performance. Corroborating the above, Cassady, cited in Akinleke and Adeaga (2014), reported that between 25% and 40% of students experience test anxiety, which also significantly interferes with their performance, emotional and behavioural well-being, and attitudes toward school (Huberty, 2009). Students with disabilities tend to have higher rates of test anxiety (Whitaker Sena, Lowe, & Lee, 2007; Woods, Parkinson, & Lewis, 2010). Female students have also been found to be more test anxious than their male counterparts (Cizek & Burg, 2006), although other studies report different findings. Research reports on the test mode effect on students’ test anxiety are inconsistent; that is, there are conflicting reports about the effects of CBT and PPT on test anxiety. A few studies have examined the effects of CBT or PPT on students’ test anxiety, and their results seem inconsistent, providing no support that CBTs or PPTs induce additional anxiety or impact performance levels positively (Cassady & Cridley, 2005; Stowell & Bennett, 2010). Some studies reported increased test anxiety among students unfamiliar with the use of computers (Erle, Benjamin, Einar & Raymond, 2006).

Revuleta, Ximenez and Olea (2003) and Schult and McIntosh (2004) reported no correlation between the anxiety levels of students who take a PPT and those who take a CBT. However, a study by Stowell and Bennett (2010) found some relationship between the two test types and anxiety: students with high anxiety in the classroom had less anxiety when taking their exams online, while students with low classroom anxiety had more anxiety taking an online exam. They also found that the relationship between test performance and test anxiety was stronger in the classroom setting. Research reports on the effect of demographic attributes on students’ CBT and PPT performance are not consistent. For example, some studies indicated that gender was not related to the performance difference between CBT and PPT (Alexander, Bartlett, Truell & Ouwenga, 2001; Clariana & Wallace, 2002), while other studies suggested that gender is associated with the test delivery mode (Gallagher, Bridgeman, & Calahan, 2002; Leeson, 2006), with male examinees benefiting from the CBT format more than female examinees, who showed slightly poorer performance on CBTs. The opposite is the case in other studies, which reported better performance and higher regard for CBT by female students (Ayo et al., 2007; Bebetos & Antonio, 2008; Kadel, 2005). Contrary to the above findings, the Florida Department of Education (2006), Paek (2005), Poggio et al. (2005) and Sim and Horton (2005) found that, regardless of gender, students perform at similar levels whether they take tests on computers or on paper.

Male and female students’ academic performance in educational research has been observed to be poor in various tertiary institutions in Delta State. Educational research is one of the courses offered at 300 level or 400 level, depending on the tertiary institution involved. It is a prerequisite that students must offer before undertaking the practical aspect in the form of research project writing, since students are required to carry out research in partial fulfilment of their degree. The course provides an avenue for students to gain an in-depth learning experience by conducting an investigation into a topic related to their specific field of study. Students are exposed to how and why research is developed to help solve problems that currently have no answers.

Educational research has been defined as a scientific process of solving problems in education and related fields of study. It is a way of examining critically various aspects of one’s daily work; understanding and formulating guiding principles that govern a procedure; and developing and testing new theories that contribute to the advancement of one’s practice and profession. It is a habit of questioning what one does, and systematically employing scientific means to explain and find answers to one’s perceptions, with a view to instituting appropriate changes for a more effective professional service (Kumar, 2016). Educational research is a systematic and scholarly application of the scientific method, interpreted in its broad sense, to the solution of educational problems (Osuala in Ajayi & Abanobi, 2017). This implies that research is a systematic process of collecting and analyzing information or data to increase understanding of a phenomenon under study. It is the function of the researcher to contribute to the understanding of the phenomenon and to communicate that understanding to others (Mohan, 2011). It is important for the advancement of knowledge, increasing understanding of educational phenomena, providing solutions to educational problems, improving educational practice and bringing about overall development and progress.

Educational research as a course aims at introducing students to the basic concepts used in research and to scientific, social research methods and their approaches. It includes discussions on sampling techniques, research designs, and techniques of analysis. It also helps students to develop an understanding of the basics of the research process and of various research designs and techniques, to identify sources of information for literature review and data collection, to develop an understanding of the ethical dimensions of conducting applied research, and to appreciate the components of scholarly writing and evaluate its quality. Observations have revealed that students face challenges in educational research, and these lead to poor academic performance (Odunze, 2019; Ikeoji & Onyekwuluje, 2017; Siddique, Zulfiqar, & Khalid, 2020). Furthermore, globally, the completion rate for undergraduate and graduate students ranges from poor to abysmal (Rogers & Fleck, 2014; Lehtinen & Rui, as cited in Sunzuma, Zekewa & Bhukuvhani, 2012; Komba, 2016). Educational research is by its nature a challenging task for any learner irrespective of the level of study, but even more so for undergraduate students, who for the most part are first-time researchers. This is because it requires rigorous effort both to understand research and then to carry out a study to solve a problem or understand a phenomenon.

There is a wide knowledge gap, as some students over the years have shown a lack of positive learning outcomes, portrayed in their inability to display an understanding of educational research in various tertiary institutions. The trend is particularly worrisome when viewed against the requirement that students must offer educational research as a course before graduation in tertiary institutions. The above situation is worrisome because it shows that the academic achievement of both male and female students in educational research is not encouraging. One may be forced to ask whether the students’ poor academic performance in educational research is occasioned by the use of PPT. The search to determine which of the test modes (PPT or CBT) can reduce students’ test anxiety and enhance academic performance in educational research is the concern of the present study.

Statement of the Problem

The goal of every educational setting is to monitor students’ academic achievement by using the best test mode to guarantee excellent achievement in schools. Presently, various developed countries across the globe are migrating from the traditional test mode toward the use of CBT to assess students’ academic achievement. CBT is not just an alternative method for delivering examinations; it represents an important qualitative shift away from traditional assessment because of the several benefits it offers. Nigeria as a country is not left out, as various institutions and examination bodies are migrating from the use of PPT toward the use of CBT for students’ assessment. The introduction of CBT as a method of assessment in Nigeria may raise students’ apprehension, increasing or reducing students’ test anxiety, which will in turn affect their academic achievement.

Students’ poor achievement in educational research over the years in various tertiary institutions has attracted a lot of concern. A number of researchers (Odunze, 2019; Ikeoji & Onyekwuluje, 2017; Siddique, Zulfiqar, & Khalid, 2020; Rogers & Fleck, 2014; Lehtinen and Rui, as cited in Sunzuma, Zekewa & Bhukuvhani, 2012; Komba, 2016) have observed that students’ academic performance in educational research is poor in various tertiary institutions. This may be a result of poor assessment methods used in tertiary institutions. PPT, the common traditional assessment mode, may affect students’ academic achievement and test anxiety, as test anxiety plays a significant role in academic settings and may prevent some students from realizing their fullest academic potential (Chapell et al., 2005). To this end, appropriate assessment methods need to be used in educational research in order to reduce students’ test anxiety and guarantee better academic achievement.

The results of various studies have not provided an answer to whether CBT or PPT reduces or increases students’ test anxiety or students’ academic achievement. This may cause one to ask: which of these test modes (CBT or PPT) can effectively impact students’ test anxiety and academic achievement in a positive or desired direction? Based on the inconsistency of research results and the many still unanswered questions surrounding assessment using PPT and CBT, this study investigated the effects of Computer-Based Test (CBT) and Paper-and-Pencil Test (PPT) on the academic performance of students in educational research in tertiary institutions in Delta State.

Research Questions

The following research questions were raised to guide this study:

  1. What are the mean achievement scores of students exposed to CBT and PPT in Educational research?
  2. What are the mean achievement scores of male and female students exposed to CBT in Educational research?
  3. What are the mean achievement scores of male and female students exposed to PPT in Educational research?
  4. What are the mean test anxiety scores of students exposed to CBT and PPT in Educational research?
  5. What are the mean test anxiety scores of male and female students exposed to CBT in Educational research?
  6. What are the mean test anxiety scores of male and female students exposed to PPT in Educational research?

Hypotheses

The following null hypotheses were formulated and tested at the .05 level of significance in the present study:

  1. The difference in the mean achievement scores of students exposed to CBT and PPT in Educational research will not be significant
  2. The difference in the mean achievement scores of male and female students exposed to CBT in Educational research will not be significant
  3. The difference in the mean achievement scores of male and female students exposed to PPT in Educational research will not be significant
  4. The interaction effect between gender and test mode with respect to achievement will not be significant
  5. The difference in the mean test anxiety scores of students exposed to CBT and PPT in Educational research will not be significant
  6. The difference in the mean test anxiety scores of male and female students exposed to CBT in Educational research will not be significant
  7. The difference in the mean test anxiety scores of male and female students exposed to PPT in Educational research will not be significant
  8. The interaction effect between gender and test mode with respect to test anxiety will not be significant.

METHOD

The design of this study was quasi-experimental. It utilized the pretest-posttest non-randomized control group design involving two groups: the experimental group and the control group. It is a quasi-experimental study because participants were not randomly assigned to groups. Table 1 shows the design used for this study:

Table 1: Design of the Study

Group Pre-test Treatment Post-test
Experimental Group O1 X O2
Control Group O1 – O2

 Symbols

X         – Treatment

O1        – Pre-test

O2        – Post-test

The population of this study comprised all final-year students who offered educational research in various tertiary institutions in Delta State. The reason behind the selection of tertiary institutions was to ensure that male and female students were adequately considered in the present study.

The sample of this study comprised 113 final-year students who offered educational research in various tertiary institutions in Delta State. The sample was made up of male and female final-year students who offered educational research in the selected tertiary institutions. A purposive sampling technique was used to select two tertiary institutions in Delta State, the criterion being the presence of a functional and well-equipped Information and Communication Technology (ICT) centre in each selected institution. This helped to facilitate the successful completion of the study. Two final-year classes were selected, one from each of the two selected tertiary institutions.

The researchers used a simple random sampling technique to assign the selected tertiary institutions to the experimental and control groups through balloting. The first school drawn through balloting became the experimental group while the second became the control group. The number of male and female final-year students in the experimental and control groups was determined during data collection.

Two instruments were used for data collection in this study. They are Educational Research Achievement Test (ERAT) and the Test Anxiety Inventory (TAI).

  1. Educational Research Achievement Test (ERAT)

The ERAT is a 40-item, four-option multiple-choice objective test on the course contents of educational research. The questions covered all levels of objectives in the cognitive domain. The instrument was constructed by the researchers, who are professionals in educational research. The ERAT was used to collect data on students’ academic achievement.

  2. Test Anxiety Inventory (TAI)

The TAI was developed by Spielberger in 1980 and validated in Nigeria by Oladimeji (2005). It measures anxiety proneness in examinations and evaluative situations. The inventory was designed for secondary school students and undergraduates, and consists of 20 items that assess three components of test anxiety, namely Worry (W), Emotionality (E) and Total anxiety score (T). Worry (W) refers to excessive preoccupation and concern about the outcome of a test, especially the consequences of failure. Emotionality (E) refers to an individual’s behavioural reactions and feelings aroused by a test situation. The Total anxiety score (T) is the sum of W and E; it refers to the total cognitive, affective and behavioural reactions to test/examination situations. Responses to the items range from “almost never” to “almost always”, with a minimum score of 20 and a maximum of 80. The TAI was used to collect data on the students’ test anxiety in the present study.

Face and content validation was carried out for the ERAT. Two copies of the ERAT were given to two experts in Educational Measurement and Evaluation, who were requested to vet the items in terms of clarity of wording, appropriateness to the class level and plausibility of distracters in order to ascertain the face and content validity of the ERAT. The corrections and suggestions made were effected in the final version of the ERAT. The TAI used had already been validated: Oladimeji (2005) reported that different forms of validation, such as concurrent, discriminant, construct and convergent validity, were determined when it was used on Nigerian students.

The reliability coefficient of the ERAT was determined using the Kuder-Richardson formula 21. The 40 items of the ERAT were administered to 20 final-year students who offered educational research at Nnamdi Azikiwe University, Awka. The reliability test yielded a coefficient of 0.86, which showed that the instrument was fit for this study. Oladimeji (2005) noted that the Pearson Product Moment statistical technique was used to correlate the test-retest scores of the TAI under the non-examination condition; the reliability coefficients obtained were 0.73, 0.79 and 0.56 for W, E and T respectively, significant at p < 0.01, one-tailed, df = 98.
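For readers who wish to see how such a coefficient is obtained, the sketch below computes a Kuder-Richardson formula 21 estimate from a matrix of dichotomously scored items. It is illustrative only: the function name and the randomly generated pilot matrix are assumptions made for the example, not the study’s actual data.

```python
import numpy as np

def kr21(score_matrix):
    """Kuder-Richardson formula 21 for dichotomously scored (0/1) items.

    KR-21 uses only the number of items, the mean total score and the
    variance of total scores, assuming items are of similar difficulty.
    """
    scores = np.asarray(score_matrix, dtype=float)
    k = scores.shape[1]                    # number of items (40 for the ERAT)
    totals = scores.sum(axis=1)            # each examinee's total score
    mean, var = totals.mean(), totals.var(ddof=1)
    return (k / (k - 1)) * (1 - (mean * (k - mean)) / (k * var))

# Hypothetical pilot responses: 20 examinees x 40 items. The value printed
# here describes this fabricated matrix, not the reported coefficient of 0.86.
rng = np.random.default_rng(0)
pilot = (rng.random((20, 40)) < 0.6).astype(int)
print(round(kr21(pilot), 2))
```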

For the Test Anxiety Inventory (TAI), the items were scored on a four-point rating scale ranging from 1 for “almost never” to 4 for “almost always”. These scores were summed to obtain the test anxiety score. The Educational Research Achievement Test (ERAT) contained 40 questions carrying equal marks; a correct answer was scored one and an incorrect answer zero.
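As an illustration of this scoring scheme, the short sketch below converts raw responses to scores. The intermediate response labels for the TAI scale and the helper names are assumptions made for the example; only the 1-to-4 weighting and the one-mark-per-correct-answer rule come from the description above.

```python
# Assumed anchor labels for the four-point TAI scale; the paper names only
# the end points ("almost never" = 1, "almost always" = 4).
TAI_SCALE = {"almost never": 1, "sometimes": 2, "often": 3, "almost always": 4}

def score_tai(responses):
    """Sum the 20 TAI item ratings; the possible range is 20 to 80."""
    return sum(TAI_SCALE[r] for r in responses)

def score_erat(answers, key):
    """One mark per correct ERAT answer, zero otherwise (40 items in all)."""
    return sum(1 for given, correct in zip(answers, key) if given == correct)

print(score_tai(["almost always"] * 20))                  # 80, the maximum
print(score_erat(list("ABCD") * 10, list("ABCD") * 10))   # 40, all correct
```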

The instruments for data collection in this study (ERAT and TAI) were administered to the students in the experimental and control groups. The ERAT in PPT mode and the TAI were administered as a pre-test to both groups, and the data obtained served as pre-test scores. The ERAT in CBT mode was administered to the experimental group as a post-test while the ERAT in PPT mode was administered to the control group as a post-test. After the achievement test, the TAI was administered to the two groups as a post-test to determine the students’ test anxiety. The data collected were analyzed using means to answer the research questions, and the hypotheses were tested at the .05 level of significance using Analysis of Covariance (ANCOVA). The analyses were done using the Statistical Package for the Social Sciences (SPSS). The decision rule for the test of each null hypothesis was based on the p-value: the hypothesis was retained if the p-value was greater than 0.05 and otherwise rejected.
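The analysis itself was run in SPSS; purely as an illustration of the procedure described above, a minimal ANCOVA sketch using Python’s statsmodels is shown below. The file name and column names are assumptions for the example. Post-test scores are modelled with test mode and gender as factors and the pre-test score as covariate, and each null hypothesis is retained when its p-value exceeds .05.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data file with one row per student:
# group ("CBT"/"PPT"), gender ("male"/"female"), pretest, posttest.
data = pd.read_csv("erat_scores.csv")

# ANCOVA: post-test adjusted for pre-test, with group, gender and their interaction.
model = ols("posttest ~ C(group) * C(gender) + pretest", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=3)  # Type III sums of squares, as in SPSS
print(anova_table)

# Decision rule described in the text: retain the null hypothesis when p > .05.
for effect in ["C(group)", "C(gender)", "C(group):C(gender)"]:
    p = anova_table.loc[effect, "PR(>F)"]
    print(effect, "not significant" if p > 0.05 else "significant", round(p, 3))
```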

RESULTS

Research Question 1: What are the mean achievement scores of students exposed to CBT and PPT in Educational research?

Table 2: Mean Achievement Scores of Students Exposed to CBT and PPT in Educational Research (N=113)

Group N Pre-test x Post-test x
Experimental (CBT) 50 21.7 20.6
Control (PPT) 63 21.9 22.1

The analyses in Table 2 show the pre-test and post-test mean achievement scores of students exposed to CBT and PPT in Educational research. They further reveal that the mean achievement scores of students exposed to PPT are higher than those of students exposed to CBT.

Research Question 2: What are the mean achievement scores of male and female students exposed to CBT in Educational research?

Table 4: Mean Achievement Scores of Male and Female Students Exposed to CBT in Educational Research (N=50)

Experimental Group (CBT) N Pre-test x Post-test x
Male 20 22.2 20.9
Female 30 21.2 20.5

Table 4 shows the pre-test and post-test mean achievement scores of male and female students exposed to CBT in Educational research. Furthermore, the analyses reveal that the mean achievement scores of male students exposed to CBT are slightly higher than those of female students exposed to CBT.

Research Question 3: What are the mean achievement scores of male and female students exposed to PPT in Educational research?

Table 5: Mean Achievement Scores of Male and Female Students Exposed To PPT in Educational Research (N=63)

Control Group (PPT) N Pre-test x Post-test x
Male 24 23.9 22.1
Female 39 18.5 20.3

The information in Table 5 shows the pre-test and post-test mean achievement scores of male and female students exposed to PPT in Educational research. In addition, the analyses reveal that the mean achievement scores of male students exposed to PPT are higher than those of their female counterparts exposed to PPT.

Research Question 4: What are the mean test anxiety scores of students exposed to CBT and PPT in Educational research?

Table 6: Mean Test Anxiety Scores of Students Exposed to CBT and PPT in Educational Research (N=113)

Group N Pre-test x Post-test x
Experimental (CBT) 50 33.6 42.9
Control (PPT) 63 40.2 47.4

The data analyzed in Table 6 show the pre-test and post-test mean test anxiety scores of students exposed to CBT and PPT in Educational research. The analyses also reveal that the mean test anxiety scores of students exposed to PPT are higher than those of their counterparts exposed to CBT.

Research Question 5: What are the mean test anxiety scores of male and female students exposed to CBT in Educational research?

Table 7: Mean Test Anxiety Scores of Male and Female Students Exposed to CBT in Educational Research (N=50)

Experimental Group (CBT) N Pre-test x Post-test x
Male 20 47.1 45.3
Female 30 49.3 44.8

The analyses in Table 7 show the pre-test and post-test mean test anxiety scores of male and female students exposed to CBT in Educational research. They further reveal that the mean test anxiety scores of female students exposed to CBT are higher than those of male students exposed to the same test mode in Educational research.

Research Question 6: What are the mean test anxiety scores of male and female students exposed to PPT in Educational research?

Table 8: Mean Test Anxiety Scores of Male and Female Students Exposed to PPT in Educational Research (N=63)

Control Group (PPT) N Pre-test x Post-test x
Male 24 41.5 39.2
Female 39 43.1 40.9

The information presented in Table 8 shows the pre-test and post-test mean test anxiety scores of male and female students exposed to PPT in Educational research. In addition, the analyses reveal that the mean test anxiety scores of female students exposed to PPT are higher than those of their male counterparts exposed to the same test mode in Educational research.

Hypothesis 1: The difference in the mean achievement scores of students exposed to CBT and PPT in Educational research will not be significant

Table 9: Tests of Between-Subjects Effects of Mean Achievement Scores of Students Exposed To CBT and PPT in Educational Research

Source Type III Sum of Squares Df Mean Square F Sig.
Corrected Model 13753.064a 2 4584.355 61.675 .000
Intercept 7662.354 1 7662.354 103.084 .000
Pretest 9136.566 1 9136.566 122.918 .000
Groups 947.886 1 947.886 12.752 .019
Error 2675.911 110 74.331
Total 117331.000 113
Corrected Total 16428.975 112

*p < 0.05

The analyses in Table 9 revealed that the test mode effect on achievement was significant, given that F(1,104) = 5.678 and p < 0.05 (.019 < 0.05). Therefore, the null hypothesis was rejected; thus, the difference in the mean achievement scores of students in CBT and PPT was significant.

Hypothesis 2: The difference in the mean achievement scores of male and female students exposed to CBT in Educational research will not be significant

Table 10: Tests of Between-Subjects Effects of Mean Achievement Scores of Male and Female Students Exposed to CBT in Educational Research

Source Type III Sum of Squares Df Mean Square F Sig.
Corrected Model 13753.064a 2 4584.355 61.675 .000
Intercept 7662.354 1 7662.354 103.084 .001
Pretest 21.668 1 21.668 .292 .000
Gender 947.886 1 947.886 12.752 .798
Error 2675.911 47 74.331
Total 117331.000 50
Corrected Total 16428.975 49

*p > 0.05

The analyses in Table 10 revealed that F(1,53) = .066 and p > 0.05 (.798 > 0.05), which implies that the gender effect on achievement within CBT was not significant. Therefore, the null hypothesis was not rejected; thus, the difference in the mean achievement scores of male and female students exposed to CBT was not significant.

Hypothesis 3: The difference in the mean achievement scores of male and female students exposed to PPT in Educational research will not be significant

Table 11: Test Between Subject Effects of Mean Achievement Scores of Male and Female Students Exposed to PPT in Educational Research

Source Type III Sum of Squares Df Mean Square F Sig.
Corrected Model 13835.721a 2 6917.861 55.387 .000
Intercept 1103469.605 1 1103469.605 8834.854 .000
Pretest 13786.005 1 13786.005 110.377 .846
Gender .030 1 .030 .000 .000
Error 18485.140 60 124.900
Total 1250925.000 63
Corrected Total 32320.861 62

*p < 0.05

The analyses in Table 11 show that F(1,48) = 22.565 and p < 0.05 (.000 < 0.05). This revealed that the gender effect on achievement within PPT was significant. Consequently, the null hypothesis was rejected, which implies that the difference in the mean achievement scores of male and female students exposed to PPT was significant.

Hypothesis 4: The interaction effect between gender and test mode with respect to achievement will not be significant

Table 12: Interaction Effect Between Gender and Test Mode with Respect to Achievement

Source Type III Sum of Squares Df Mean Square F Sig.
Corrected Model 13835.922a 4 3458.981 46.688 .000
Intercept 7707.542 1 7707.542 104.033 .000
Groups 7863.532 1 7863.532 106.139 .000
Gender 951.864 1 951.864 12.848 .001
Pretest 1.113 1 1.113 .015 .903
Groups * Gender 82.858 1 82.858 1.118 .298
Error 2593.053 100 74.087
Total 117331.000 113
Corrected Total 16428.975 112

p > 0.05

The analyses in Table 12 revealed that the interaction effect between gender and test mode with respect to achievement was not significant, given that F(1,102) = .015 and p > 0.05 (.904 > 0.05). As a result, the null hypothesis was not rejected.

Hypothesis 5: The difference in the mean test anxiety scores of students exposed to CBT and PPT in Educational research will not be significant

Table 13: Test of Between Subject Effects of Mean Test Anxiety Scores of Students Exposed to CBT and PPT in Educational Research 

Source Type III Sum of Squares Df Mean Square F Sig.
Corrected Model 173.169a 2 86.585 2.749 .000
Intercept 132888.390 1 132888.390 4218.814 .000
Pretest2 168.576 1 168.576 5.352 .000
Groups 8.465 1 8.465 .269 .911
Error 4661.851 110 31.499
Total 145944.000 113
Corrected Total 4835.020 112

*p > 0.05

The results in Table 13 show that F(1,104) = .013 and p > 0.05 (.911 > 0.05), which implies that the test mode effect on the mean test anxiety scores of students exposed to CBT and PPT in Educational research was not significant. So, the null hypothesis was not rejected, implying that the difference in the mean test anxiety scores of students in CBT and PPT was not significant.

Hypothesis 6: The difference in the mean test anxiety scores of male and female students exposed to CBT in Educational research will not be significant

Table 14: Test of Between Subject Effects of Mean Test Anxiety Scores of Male and Female Students Exposed to CBT in Educational Research

Source Type III Sum of Squares Df Mean Square F Sig.
Corrected Model 13835.721a 2 6917.861 55.387 .000
Intercept 1103469.605 1 1103469.605 8834.854 .000
Pretest2 13786.005 1 13786.005 110.377 .000
Gender .030 1 .030 .000 .038
Error 18485.140 47 124.900
Total 1250925.000 50
Corrected Total 32320.861 49

*p < 0.05

The analyses in Table 14 show that F(1,53) = 4.513 and p < 0.05 (.038 < 0.05). This revealed that the gender effect on the mean test anxiety scores of male and female students exposed to CBT in Educational research was significant. Thus, the null hypothesis was rejected, which implies that the difference in the mean test anxiety scores of male and female students exposed to CBT was significant.

Hypothesis 7: The difference in the mean test anxiety scores of male and female students exposed to PPT in Educational research will not be significant

Table 15: Test of Between Subject Effects of Mean Test Anxiety Scores of Male and Female Students Exposed to PPT in Educational Research

Source Type III Sum of Squares Df Mean Square F Sig.
Corrected Model 757.411a 2 378.706 32.218 .002
Intercept 508.995 1 508.995 43.302 .000
Pretest2 706.561 1 706.561 60.109 .001
Gender 66.741 1 66.741 5.678 .067
Error 1222.477 60 11.755
Total 50691.000 63
Corrected Total 1979.888 62

p > 0.05

The result in Table 15 shows that the gender effect on the mean test anxiety scores of male and female students exposed to PPT in Educational research was not significant, given that F(1,48) = 3.501 and p > 0.05 (.067 > 0.05). Therefore, the null hypothesis was not rejected, implying that the difference in the mean test anxiety scores of male and female students exposed to PPT was not significant.

Hypothesis 8: The interaction effect between gender and test mode with respect to anxiety will not be significant

Table 16: Interaction Effect Between Gender and Test Mode With Respect To Anxiety

Source Type III Sum of Squares Df Mean Square F Sig.
Corrected Model 34.702a 4 8.676 434.712 .000
Intercept 10.934 1 10.934 547.879 .000
Groups .003 1 .003 .164 .896
Gender 32.858 1 32.858 1646.461 .006
Pretest2 7.7605 1 7.7605 .000 .000
Groups * Gender .020 1 .020 .997 .912
Error 2.914 100 .020
Total 391.000 113
Corrected Total 37.616 112

*p > 0.05

The data presented in Table 16 show that the interaction effect between gender and test mode with respect to test anxiety was not significant, given that F(1,102) = .012 and p > 0.05 (.912 > 0.05). The null hypothesis was therefore not rejected.

DISCUSSION

Achievement Scores of Students on CBT and PPT in Educational Research

One of the findings of the study revealed that the mean achievement scores of students exposed to PPT were higher than those of students exposed to CBT, and the difference in their mean achievement scores was significant. Corroborating this finding, Higgins, Russell, and Hoffmann (2005), in a study comparing Vermont students randomly assigned to complete a reading comprehension test via CBT or PPT, found that students completing the test on paper received the highest mean score, followed by their counterparts using the computer-based test.

Achievement Scores of Male and Female Students on CBT and PPT in Educational Research

Another finding of the study revealed that the mean achievement scores of male students exposed to CBT were higher than those of female students exposed to CBT. Nevertheless, the result of the hypothesis test in Table 10 shows that the difference in the mean achievement scores of male and female students exposed to CBT was not significant. Also, the finding in Table 5 shows that the mean achievement scores of male students exposed to PPT were higher than those of their female counterparts exposed to PPT, and the difference in the mean achievement scores of male and female students exposed to PPT was significant.

In agreement with the above findings, Gallagher, Bridgeman and Calahan (2002) as well as Leeson (2006) found that male examinees performed better on the CBT format than female examinees, who showed slightly poorer performance on CBTs. Moreover, a number of studies have found that boys outperform girls when tested on the computer, while girls perform significantly better on paper-and-pencil tests (Csapó et al., 2009; Halldórsson et al., 2009; Higgins et al., 2005; Lee, 2009; Martin & Binkley, 2009; Sórenson & Andersen, 2009). Researchers have hypothesized several reasons for this finding. Some suggest that although gender gaps in the volume of computer usage have closed rapidly over the last few years, boys are much more likely to play online games and use game-type software similar to the flash animations and video footage used with many computer-based test items.

Test Anxiety Scores of Students on CBT and PPT in Educational Research

The finding of this study revealed that students exposed to PPT had more test anxiety than their counterparts exposed to CBT, even though the difference in the mean test anxiety scores of students in CBT and PPT was not significant. Correspondingly, Wang and Chuang (2002) conducted a study using junior high, high school, and college students; measures of anxiety, test preference, adaptability of the test, and acceptance of test results all showed that students viewed the CBT with less anxiety and a positive preference. Likewise, research conducted by Fritz and Marzeck, cited by Gwen (2013), comparing two groups of junior high students, one taking a PPT and one taking a CBT version of the same test, found lower rates of self-reported state test anxiety in the group taking the CBT version than in the group taking the PPT version. The general consensus is that there is no significant difference in anxiety levels between students who take a PPT and those who take a CBT (Revuleta, Ximenez & Olea, 2003; Schult & McIntosh, 2004).

Test Anxiety Scores of Male and Female Students on CBT and PPT in Educational Research

Finally, the findings revealed that female students exposed to CBT had more test anxiety than male students exposed to the same test mode; hence, there was a significant difference in the mean test anxiety scores of male and female students exposed to CBT. Also, the finding in Table 8 shows that female students exposed to PPT were more test anxious than their male counterparts exposed to the same test mode. However, there was no significant difference in the mean test anxiety scores of male and female students exposed to PPT.

Comparatively, Cizek and Burg (2006) found female students to be more test anxious than their male counterparts. Nadeem, Akhtar, Saira and Syeda (2012), in their study, used a sample of 200 students selected by stratified sampling and formed three groups each of male and female students. In their research, a questionnaire (the Otis self-administering test of mental ability) and an anxiety measurement scale were selected as instruments for data collection. It is noteworthy that in their results the female students had more test anxiety than the male students.

RECOMMENDATIONS

The following recommendations were made:

  1. Tertiary institution authorities and other stakeholders in education should adopt both CBT and PPT as forms of student assessment in various tertiary institutions’ examinations.
  2. Government should provide tertiary institutions in the country with adequate computers and internet facilities to give students enough CBT practical sessions. CBT training centres should, as a matter of urgency, be set up in various tertiary institutions in the country to train candidates on CBT before their examinations. This will ensure better academic achievement and reduce test anxiety among students in tertiary institutions.
  3. Computers, for one reason or another, tend to break down or are prone to random faults. Therefore, school authorities are advised to have a good backup system in place when using CBT for students’ assessment. All work must be saved to a removable drive to facilitate the transfer of the examination paper to the backup whenever the need arises.

REFERENCES

  1. Abanobi, C. C. (2013). Psychometric properties of NABTEB educational research multiple-choice test items from 2005 to 2011.(Unpublished M.Ed Thesis), Nnamdi Azikiwe University, Awka, Nigeria.
  2. Ajayi, P. O. & Abanobi, C. C. (2017). Educational research: A fundamental guide. Asaba. Rupee-Com Publishers.
  3. Akman-Yesilel, D. B. (2012). Test anxiety in ELT classes. Frontiers of Language and Teaching, 3, 24-31.
  4. Akinleke W. O., & Adeaga, T. M. (2014). Contributions of test anxiety, study habits and locus of control to academic performance. British Journal of Psychology Research, 2 (1), 14-24.
  5. Alabi, A.T., Issa, A. O. & Oyekunle, R. A. (2012). The use of computer based testing method for the conduct of examinations at the university of Ilorin. Ife Journal of Educational Leadership, Administration and Planning (IJELAP), 1(1), 226.
  6. Al-Amri, S. (2009). Computer-based testing vs. paper-based testing: Establishing the comparability of reading tests through the evolution of a new comparability model in a Saudi EFL context. Thesis submitted for the degree of Doctor of Philosophy in Linguistics, University of Essex.
  7. Alderson, J. C. (2000). Technology in testing: The present and the future. System, 28(4), 593-603.
  8. Alexander, M., Bartlett, J., Truell, A., & Ouwenga,  K. (2001).  Testing in computer technology courses:  An investigation of equivalency in performance between online and paper-and-pencil methods. Journal of Career and Technical Education, 18(1), 69-77.
  9. Ali, N., Jussof, K., Ali, S., Mokhtar, N., Syafena, A., & Salamat, S. (2009). The factors influencing students’ performance at universiti Teknologi MARA Kedah, Malaysia. Management Science and Engineering, 3(4), 81-90.
  10. Alisa, E. S. (2014). The impact of assessment delivery method on student achievement in language arts. Published Ph.D thesis submitted to the Graduate Department and Faculty of the School of Education of Baker University.
  11. Ayo, C. K., Akinyemi, I .O., Adebiyi, A. A., & Ekong, U. O. (2007). The prospects of e-examination implementation in Nigeria. Turkish Online Journal of Distance Education, 8 (4), 125-134.
  12. Aziz, S., & Hassan, H. A. (2012). Study of computer anxiety of higher secondary students in Punjab. International Journal of Social Science & Education, 2(2), 264-273.
  13. Bebetos, C., & Antonio, S. (2008). Why use information and communication technology in schools? Some theoretical and practical issues. Journal of Information Technology for Teacher Education, 10(1&2), 7-1.
  14. Best Answer (nd). What are the advantages and disadvantages of paper and pencil? Retrieved from https://answers.yahoo.com/question/index?qid=20070518201951AAgpqYq
  15. Bridgeman, B. (2009). Experiences from large-scale computer-based testing in the USA. In F. Scheuermann & J. Bjórnsson (Eds.), The Transition to Computer-Based Assessment: New Approaches to Skills Assessment and Implications for Large-Scale Testing. Luxembourg: Office for official publications of the European communities.
  16. Brown, H. D. (2003). Language assessment: Principles and classroom practices. USA: Longman.
  17. Buško, V. (2009). Shifting from paper-and-pencil to computer-based testing: Requisites, challenges and consequences for testing outcomes. A Croatian perspective. In F. Scheuermann & J. Bjórnsson (Eds.), The Transition to Computer-Based Assessment: New Approaches to Skills Assessment and Implications for Large-Scale Testing. Luxembourg: Office for official publications of the European communities.
  18. Calaguas, G.M. (2011). Academic achievement and academic adjustment difficulties among college freshmen. Journal of Arts, Science and Commerce, 2(3), 72-76.
  19. Chapell,  M.S.,  Blanding,  Z.B.,  Takahashi,  M., Silverstein,  M.E.,  Newman,  B.,  Gubi,  A., & Mccann, N. (2005). Test anxiety and academic performance in undergraduate and graduate students. Journal of Educational Psychology, 97(2), 268-274.
  20. Choi, I., Kim, K., & Boo, J. (2003). Comparability of a paper-based language test and a computer-based language test. Language Testing, 20(3), 295-320.
  21. Clariana, R., & Wallace, P. (2002). Paper-based versus computer-based assessment: Key factors associated with the test mode effect. British Journal of Educational Technology, 33(5), 593-602.
  22. Cizek, G. J., & Burg, S. S. (2006). Addressing test anxiety in a high-stakes environment: Strategies for classrooms and schools. Thousand Oaks, CA: Corwin
  23. Csapó, B., Ainley, J., Bennett, R., Latour, T., & Law, N. (2010). Draft white paper 3: Technological issues for computer-based assessment. Assessment and Teaching of 21st Century Skills, The University of Melbourne, Australia. Retrieved from http://www.atc21s.org/GetAssets.axd?FilePath=/Assets/Files/dc7c5be7-0b3a-4b7d-8408-cc610800cc76.pdf
  24. Columbus, A. M. (2008). Advances in psychology research. New York, NY: Nova Science Publishers.
  25. Dermo, J., & Eyre, S.  (2008). Secure, reliable and effective institution-wide e-assessment: Paving the ways for new technologies.  In F.  Khandia (Ed.),  Proceedings of  12th International  CAA Conference,  95 –106. Loughborough:  University of Loughborough.
  26. Education Commission of the States. (2010). Assessment: Computer-based. Retrieved from http://www.ecs.org/html/issue.asp?issueID=12&subIssueID=76.
  27. Erle, L., Benjamin, O., Einar, W. S., & Raymond, S. (2006). Computer-based versus pen-and-paper testing: Students’ perception. Ann Acad Med Singapore, 35, 599-603.
  28. Florida Department of Education. (2006). What do we know about choosing to take a high-stakes test on a computer? Retrieved from http://www.fldoe.org/asp/k12memo/pdf/WhatDoWeKnowAbout ChoosingToTakeAHighStakesTestOnAComputer.pdf
  29. Gallagher, A., Bridgeman, B., & Calahan, C. (2002). The effect of computer-based test on racial-ethnic and gender groups. Journal of Educational Measurement, 39(2), 133-147.
  30. Gamire, E., & Pearson, G. (Eds.). (2006). Tech tally: Approaches to assessing technological literacy. Washington, DC: The National Academies Press.
  31. Gary, V. M., Jones. P., McNeil, H. P., & Kumar. R. K.. (2008). Integrated online formative assessments in the biomedical sciences for medical students: Benefits for learning. BMC Med Educ, 8, 52.
  32. George, Y. (2011). Students’ perception of computer-based assessment in the University of Ilorin, Ilorin, Nigeria. Retrieved from www.scribd.com/doc/71979150
  33. Harris, H. L. & Coy, D. R. (2003). Helping students cope with test anxiety. ERIC Counselling and Student Services Clearinghouse. ERIC Identify, ED479355.
  34. Huberty, T. J. (2009). Test and performance anxiety. Principal Leadership, 10(1), 12-16.
  35. Ikeoji, C. N. & Onyekwuluje, C. O. (2017). Research difficulties confronting graduate students of Agricultural Education in Delta State University, Abraka. Journal of Education and Social Sciences, 6, (2), 365-375 ISSN 2289-9855
  36. Jimoh, R. G., AbdulJaleel, K. S., & Kawu, Y. K. (2012). Students’ perception of computer-based test (CBT) for examining undergraduate chemistry courses. Journal of Emerging Trends in Computing and Information Sciences, 3(2). ISSN 2079-8407.
  37. John, C. K., Cynthia, G. P., Judith, A. S., & Tim, D. (2002). Practical considerations in computer-based testing. Sheridan Books/Lawrence Erlbaum Associates, New Jersey, USA.
  38. Kadel, C.  (2005). Innovation in education: The increasing digital world-issue of today and tomorrow. New Jersey: Lawrence Erlbaum Associates
  39. Kikis-Papadakis,  K., & Kollias, A. (2009). Reflections on paper-and-pencil tests to e-assessments: Narrow and broadband paths to 21st century challenges. In F. Scheuermann & J. Bjórnsson (Eds.), The Transition to Computer-Based Assessment: New Approaches to Skills Assessment and Implications for Large-Scale Testing. Luxembourg: Office for official publications of the European communities.
  40. Komba, S.C. (2016). Challenges of writing theses and dissertations among postgraduate students in Tanzanian higher learning institutions. International Journal of Research Studies in Education, 5 (3)71-80.
  41. Kozma, R. (2009). Transforming education: Assessing and teaching 21st century skills. assessment call to action. In F. Scheuermann & J. Bjórnsson (Eds.), The Transition to Computer-Based Assessment: New Approaches to Skills Assessment and Implications for Large-Scale Testing. Luxembourg: Office for official publications of the European communities.
  42. Kumar, R. (2016). Research methodology: A step-by-step guide for beginners.
  43. Kyllonen, P.C. (2009). New constructs, methods, & directions for computer-based assessment. In Scheuermann & J. Bjórnsson (Eds.), The Transition to Computer-Based Assessment: New Approaches to Skills Assessment and Implications for Large-Scale Testing. Luxembourg: Office for official publications of the European communities.
  44. Lawson,  J.  D. (2006). Test anxiety: A test of attentional bias.  A Published Dissertation, University of Maine.
  45. Lee, M. (2009). CBAS in Korea: Experiences, results and challenges. In F. Scheuermann & J. Bjórnsson (Eds.), The Transition to Computer-Based Assessment: New Approaches to Skills Assessment and Implications for Large-Scale Testing. Luxembourg: Office for official publications of the European communities.
  46. Leeson, H.V. (2006). The mode effect: A literature review of human and technological issues in computerized testing. International Journal of Testing, 6(1), 1–24.
  47. Lim, E. C. H., Ong, B. K. C., Wilder-Smith, E. P. V., & Seet, R. C. S. (2006). Computer-based versus pen-and-paper testing: Students’ perception. Ann Acad Med Singapore, 35(9), 599-603.
  48. Martin, R. (2009). Utilising the potential of computer-delivered surveys in assessing scientific literacy. In F. Scheuermann & J. Bjórnsson (Eds.), The Transition to Computer-Based Assessment: New Approaches to Skills Assessment and Implications for Large-Scale Testing. Luxembourg: Office for official publications of the European communities.
  49. Moe, E. (2009). Introducing large-scale computerised assessment: Lessons learned and future challenges. In F. Scheuermann & J. Bjórnsson (Eds.). The Transition to Computer-Based Assessment: New Approaches to Skills Assessment and Implications for Large-Scale Testing. Luxembourg: Office for official publications of the European communities.
  50. Mohan, R. (2011). Research methods in education. Neelkamel Publications Pvt. Ltd New Delhi.
  51. Mubashrah, J., Tariq, R.H., & Shami, P.A. (2012). Computer-based vs paper-based examinations: perceptions of university teachers. The Turkish online Journal of Educational Technology (TOJET), 11(4), 371-381.
  52. Nkwocha, P.C. (2004). Measurement and evaluation in the field of education. Owerri: Versatile Publishers.
53. Nunathap, G. (2007). Gender analysis of academic achievement among high school students. Thesis submitted to the University of Agricultural Sciences, Dharwad. Retrieved from http://www.etd.uasd.edu/ft/th9534.pdf
54. Nwosu, K. C. (2012). Effects of reciprocal peer tutoring on the test anxiety and academic achievement of low achieving students. Unpublished M.Ed thesis, School of Education, Nnamdi Azikiwe University, Awka, Nigeria.
55. Oduntan, O. E., Ojuawo, O. O., & Oduntan, E. A. (2015). A comparative analysis of student performance in paper pencil test (PPT) and computer based test (CBT) examination system. Research Journal of Educational Studies and Review, 1(1), 24-29.
56. Odunze, D. I. (2019). Examining the challenges faced by undergraduate students in writing research projects. DOI: 10.13140/RG.2.2.24476.64643. Retrieved from https://www.researchgate.net/publication/337567004
  57. Ogu, O. C., Agbanusi, E. C., & Umeasiegbu, G. O. (2008). Social psychological dynamics of sports. Onitsha: Ekumax Company Ltd.
58. Ogunmakin, A. O., & Osakuade, J. O. (2014). Computer anxiety and computer knowledge as determinants of candidates' performance in computer-based test in Nigeria. British Journal of Education, Society & Behavioural Science, 4(4), 495-507.
  59. Oladimeji, B. Y. (2005). Psychological assessment techniques in health care. Ile-Ife: Obafemi Awolowo University Press Ltd.
  60. Osuji, U. S. A. (2012). The use of e-assessment in the Nigerian higher education system. Turkish Online Journal of Distance Education (TOJDE), 11(4).
61. Paek, P. (2005). Recent trends in comparability studies. Retrieved from http://www.pearsonedmeasurement.com/downloads/research/RR_05_05.pdf
  62. Poggio, J., Glasnapp, D. R., Yang, X., & Poggio, A. J. (2005). A comparative evaluation of score results from computerized and paper & pencil mathematics testing in a large scale state assessment program. The Journal of Technology, Learning, and Assessment, 3(6). Retrieved from http://escholarship.bc.edu/cgi/viewcontent.cgi?article=1057&context=jtla
63. Pommerich, M. (2004). Developing computerized versions of paper-and-pencil tests: Mode effects for passage-based tests. The Journal of Technology, Learning, and Assessment, 2(6). Retrieved from http://escholarship.bc.edu/cgi/viewcontent.cgi?article=1002&context=jtla
  64. Psychology Dictionary (2015). What is paper-and-pencil test? Retrieved from http://psychologydictionary.org/paper-and-pencil-test/
65. Public Service Commission of Canada (2011). Paper-and-pencil instruments: An efficient method of assessment. Retrieved from http://www.psc-cfp.gc.ca/ppc-cpp/acs-cmptnce-evl-cmpinc/pp-instrmnt-pc-eng.htm
  66. Puhan, G., Boughton, K., & Kim, S. (2007). Examining differences in examinee performance in paper and pencil and computerized testing. The Journal of Technology, Learning, and Assessment, 6(3). Retrieved from http://escholarship.bc.edu/cgi/viewcontent.cgi?article=1115&context=jtla.
  67. Putwain, D. W., & Daniels, R. A. (2010). Is the relationship between competence beliefs and test anxiety influenced by goal orientation? Learning and Individual Differences, 20, 8–13. http://dx.doi.org/10.1016/j.lindif.2009.10.006
  68. Putwain, D. W. (2008). Deconstructing test anxiety. Emotional and Behavioral Difficulties, 13(2), 141-155. http://dx.doi.org/10.1080/13632750802027713
  69. Rabinowitz, S., & Brandt, T. (2001). Computer-based assessment: Can it deliver on its promise? West Ed Knowledge Brief. ERIC Document Reproduction Service No. ED462447.
70. Rana, R. A., & Mahmood, N. (2010). The relationship between test anxiety and academic achievement. Bulletin of Education and Research, 32(2), 63-74.
  71. Revuelta, J., Ximenez, M., Carmen, A., & Olea, J. (2003). Psychometric and psychological effects of item selection and review on computerized testing. Educational & Psychological Measurement, 63(5), 791-808.
72. Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73(2), 417-458.
73. Rogers, R. A., & Fleck, B. K. B. (2014). Teaching methods to overcome challenges in online graduate-level courses. Journal of Online Doctoral Education, 1(1), 89-100.
74. Sanni, A. A., & Mohammad, M. F. (2015). Computer based testing (CBT): An assessment of student perception of JAMB UTME in Nigeria. Computing, Information Systems, Development Informatics & Allied Research Journal, 6(2), 14-28.
75. Scheuermann, F., & Bjórnsson, J. (2009). The transition to computer-based assessment: New approaches to skills assessment and implications for large-scale testing. Luxembourg: Office for Official Publications of the European Communities.
76. Schofield, J. W. (2006). Migration background, minority-group membership and academic achievement: Research evidence from social, educational and developmental psychology. AKI Research Review, 5. Retrieved on 25-07-2011 from http://193.174.6.11/alt/aki/files/aki_research_review_5.pdf
77. Schult, C. A., & McIntosh, J. L. (2004). Employing computer-administered exams in general psychology: Student anxiety and expectations. Teaching of Psychology, 31(3), 209-211.
78. Segool, N. (2009). Test anxiety associated with high-stakes testing among elementary school children: Prevalence, predictors, and relationship to student performance. ProQuest, LLC.
79. Siddique, G. K., Zulfiqar, M. S., & Khalid, M. (2020). Difficulties while conducting research in academia: Taking M.Phil students' perspectives in public and private universities. Journal of Arts and Social Sciences, 7(1), 89-95.
80. Sorana-Daniela, B., & Lorentz, J. (2007). Computer-based testing on physical chemistry topic: A case study. International Journal of Education & Development using Information and Communication Technology, 3(1), 94-95.
  81. Stowell, J. R., & Bennett, D. (2010). Effects of online testing on student exam performance and test anxiety. Journal of Educational Computing Research, 42(2), 161-171.
  82. Stringfield, S., Reynolds, D., & Schaffer, E.C. (2008). Improving secondary school students’ academic achievement through a focus on reform reliability. Retrieved on 16-12-2011 from http://www.cfbt.com/evidenceforeducation/PDF/HighReliability
83. Sunzuma, G., Zekewa, N., & Bhukuvhani, C. (2012). Undergraduate students' views on their learning of Research Methods and Statistics (RMS) course: Challenges and alternative strategies. International Journal of Social Science Tomorrow, 1(3), 1-9.
  84. Thompson, N. A., & Weiss, D. J. (2009). Computerized and adaptive testing in educational assessment. In F. Scheuermann & J. Bjórnsson (Eds.), The Transition to Computer-Based Assessment: New Approaches to Skills Assessment and Implications for Large-Scale Testing, Luxembourg: Office for Official Publications of the European Communities.
  85. Tucker, B. (2009). The next generation of testing. Educational Leadership, November 2009, 48-53.
86. Van Lent, G. (2009). Risks and benefits of CBT versus PBT in high-stakes testing: Introducing key concerns and decision making aspects for educational authorities. In F. Scheuermann & J. Bjórnsson (Eds.), The Transition to Computer-Based Assessment: New Approaches to Skills Assessment and Implications for Large-Scale Testing. Luxembourg: Office for Official Publications of the European Communities.
  87. Whitaker Sena, J. D., Lowe, P. A., & Lee, S. W. (2007). Significant predictors of test anxiety among students with and without learning disabilities. Journal of Learning Disabilities, 40, 360-376.
88. Wikipedia (2012). Computer-based assessment. Retrieved from http://en.wikipedia.org/wiki/Computer-based_assessment
  89. Woods, K., Parkinson, G., & Lewis, S. (2010). Investigating access to educational assessment for students with disabilities. School Psychology International, 31(1), 21-41.
