Using Fast Feedback Method to Enhance Students’ Learning in Chemistry: A Case Study in Yilo Krobo Senior High School

Peter Attafuah, PhD

Adjunct Lecturer, Wisconsin International University College

DOI: https://doi.org/10.51244/IJRSI.2024.1108032

Received: 02 July 2024; Revised: 28 July 2024; Accepted: 01 August 2024; Published: 02 September 2024

ABSTRACT

Traditional practices by which teachers give feedback to students during classroom discourse have been challenged. The study examined how students get information about how well they are doing in chemistry class, that is, the actual feedback that chemistry teachers give to their students. The study further proposed the FAST feedback method as an ideal way of giving feedback to enhance students’ learning during chemistry lessons. FAST is an acronym for frequent, accurate, specific and timely.

An exploratory-survey design was adopted for the study, and a sample of 865 chemistry students and 20 teachers (n = 885) responded to the instruments. A total of 795 students and 20 teachers were involved in the pre-intervention survey and 70 students were involved in the post-intervention survey. Data were collected using questionnaires, interviews, worksheets, observation of chemistry lessons and students’ exercise books. The data collected were analysed using SPSS to provide answers to the research questions: (i) how often do chemistry teachers give feedback to their students? (ii) do chemistry teachers give accurate feedback to their students? (iii) to what extent can the feedback given by chemistry teachers be described as specific? (iv) do chemistry teachers give timely feedback? (v) what effect does FAST feedback have on the learning of chemistry?

The research revealed that the feedback actually given by Ghanaian chemistry teachers in senior high schools is neither frequent, accurate, specific nor timely. It was inferred from the learning outcomes and the students’ responses to the post-intervention questionnaire that the FAST feedback method is ideal and can have a positive impact on students’ chemistry learning.

INTRODUCTION

Background to the Study

Teachers have been supported, challenged and encouraged to think critically about their methods of teaching and to build on existing good practices. There has also been a need to align classroom and school-wide assessment with school systems, so that summative pressures do not undermine formative work (Smith & Gorard, 2005; Nicol, 2010). A key component of formative assessment is teacher feedback to students and its complexities. Feedback gives specific information about current achievement, the next step (or goal) and how to reach that goal. It then requires thought and some kind of response or action from the student. When feedback is predominantly negative, studies have shown that it can discourage student effort and achievement (Hattie & Timperley, 2007).

Feedback is information provided by an agent (e.g., teacher, peer, book, parent, experience) regarding aspects of one’s performance or understanding. It occurs after instruction that seeks to provide knowledge and skills or to develop particular attitudes. A teacher has the distinct responsibility to nurture a student’s learning and to provide feedback in such a manner that the student does not leave the classroom feeling defeated. Indeed, feedback is an integral part of motivation theory, as are needs, goals, and rewards. Students seem to understand intuitively how important it is to receive ongoing performance feedback in order to sustain motivation and attain goals (Stobart, 2008). Teachers routinely and perhaps unconsciously rely on many types of feedback. Teachers give formal feedback when they grade and return assignments and examinations. Informal feedback is offered when teachers respond to students’ questions and discussions. Teachers also receive formal feedback from students when students complete end-of-term or course evaluations. Informal feedback is obtained from students when teachers detect a look of boredom or confusion from the back of the classroom.

Teachers’ reasons for wanting feedback about their teaching performances are usually a mixture of the personal and the professional. Every teacher is likely to be interested to know in a general way how he or she is doing and how things are going. While some are keen on having details that will help them consolidate good performances or make improvements, others need to be able to document the quality of their teaching skills (Duffield & Spencer, 2011).

Statement of the Problem

Teachers, and indeed chemistry teachers in Ghanaian senior high schools (SHS), have been much slower to adopt feedback as a pedagogical tool. When teachers give feedback, it should be meaningful; it should deal with the tasks the students did and whether they did them well or not. Several studies have found that students may not always understand the feedback comments they receive, thereby undermining the feedback’s learning and achievement potential (Weaver, 2006). Such discourse includes comments that are too vague, general, ambiguous, abstract, or couched in unfamiliar disciplinary discourse (Nicol, 2010).

The current situation of chemistry teaching and learning in Ghana is a concern to all, including government and society at large. Research indicates that many students find chemistry difficult, boring and uninteresting (Yeager, 2014). Large class sizes, inadequate funding, insufficient curriculum resources, poor teaching skills and lack of support for teaching, among other factors, further limit the quality of chemistry education (Yeager, 2014). One major factor that has been found to limit the teaching and learning of chemistry in Ghanaian senior high schools is the type of feedback which is given by teachers to their students. To address these problems, there is the need to develop a realistic picture of the nature of feedback that is currently being practised by teachers in Ghanaian chemistry classrooms and to identify the factors limiting the quality of feedback that is given. There is also the need to introduce a higher-quality method of giving feedback (FAST) as an intervention to help salvage the situation.

It is worthy of note that chemistry teachers do give feedback in classrooms, but their feedback deals mainly with how the students have performed overall. It does not deal directly with what exactly is wrong with the students’ responses and so does not enhance students’ learning of chemistry at the senior high school level. Although there are strong and consistent findings that feedback improves immediate performance under some circumstances, it is also clear that in some situations feedback is irrelevant and sometimes even harmful. In a meta-analysis of research in educational, organizational, and laboratory settings, Kluger and DeLisi (2013) found that in one-third of the comparisons, the feedback condition produced worse performance than the condition that was given no feedback. This is a result of the poor quality of feedback that was given. Much of teachers’ time is spent giving students feedback at the end of a particular course by commenting, correcting, and grading students’ work. This labour does not foster learning. It has been observed that giving FAST feedback in the course of teaching a particular unit enhances student learning.

Purpose of the study

The purpose of the study is to investigate the effectiveness of the FAST feedback method as an intervention in the teaching and learning of chemistry in senior high schools in Ghana.

Objectives of the Study

The main objectives of the present study include:

  1. To find out how often Chemistry teachers give feedback to their students.
  2. To determine whether chemistry teachers’ feedback to their students are accurate enough or not.
  3. To determine the extent to which feedback given by chemistry teachers can be described as specific.

Research Questions

The following research questions were addressed in the study:

  1. How often do chemistry teachers give feedback to their students?
  2. Do chemistry teachers give accurate feedback to their students?
  3. To what extent can the feedback given by chemistry teachers be described as specific?

LITERATURE REVIEW

Feedback

The term feedback can apply to a number of classroom situations and procedures, but here it refers to a range of techniques employed by the teacher to facilitate responses from the students to an exercise or task. Quality feedback should provide information to the student relating to the task or process of learning that fills a gap between what is understood and what is aimed to be understood (Hattie & Timperley, 2007).

Classification of Feedback Comments

Several authors have classified feedback comments in a number of ways (Hyatt, 2005; Hattie and Timperley, 2007; Brown and Glover, 2006). For example, analyzing 60 feedback commentaries of Masters level assignments in Educational Studies, Hyatt (2005) suggested seven types of feedback comments (each with subcategories) based on their purpose: Phatic, Developmental, Structural, Stylistic, Content-related, Methodological and Administrative comments. Phatic comments establish and maintain the teacher–student academic relationship. Developmental comments aid students with subsequent assignments. Structural comments address how an assignment is structured. Stylistic comments deal with the language and presentation style. Content-related comments, as the name implies, evaluate the appropriateness/accuracy of content. Based on function, Brown and Glover (2006) identified comments that are mere indications of the actual level of a student’s understanding or performance on an assignment, comments that provide corrections and those that provide explanations.

Feedback during Classroom Discourse

A variety of analytical skills can be fostered through the way that feedback is conducted. Learners need to know not only whether their answers are correct, but also why they are correct or why they are making errors. Useful correction or reteaching may take place during feedback on exercises, while reading skills may be enhanced by identifying clues in a text or checking a listening task by referring to the tapescript. Students may also provide useful information by indicating which questions they found most difficult and why. Learners’ performance in tasks serves an important diagnostic function. Errors may indicate the need for clarification, reteaching or repair work, while successful completion of a task may indicate that learning has taken place and that the teacher is free to move on. However, repair is rarely accomplished by setting a similar task, and accurate conclusions can only be drawn from tasks that are manageable and achievable rather than too easy or too difficult.

The need for time-consuming whole-class feedback can be minimised by effective teaching and classroom management, not only during the activity but also in earlier stages of the lesson. Clearly, feedback is more speedily conducted when the majority of student responses are correct. Feedback is an ongoing process, and a good deal of gentle correction may take place while the teacher is monitoring, thus ensuring a minimum of feedback at the end of the task. The teacher may also notice specific difficulties and choose to conduct feedback only on problematic questions. Anticipating problems, grading tasks so that they are manageable and designating time for feedback rather than leaving it open-ended are all prerequisites for efficient feedback. There is an element of security in both teacher and learners knowing that an exercise has been completed satisfactorily.

Types of Feedback

Teachers’ feedback can be descriptive or evaluative. Descriptive feedback is specific information, in the form of written comments or conversations, that helps the learner understand what is needed to improve. Evaluative feedback is a summary for the learner of how well he or she has performed on a particular task. This feedback is often in the form of letter grades, numbers, check marks, symbols and/or general comments such as “good,” “excellent,” or “needs help.” Evaluative feedback is essentially judgmental, whereas descriptive feedback is achievement- or competence-related.

Evaluative and descriptive feedback

Evaluative feedback is subdivided into positive and negative feedback, whilst descriptive feedback is categorized as either specifying feedback or constructing feedback. Positive feedback is further subdivided into rewarding feedback and approving feedback, whilst negative feedback is classified as either punishing or disapproving. On the other hand, specifying feedback under descriptive feedback is categorized as specifying attainment or specifying improvement, whilst constructing feedback is subdivided into constructing achievement or constructing the way forward (Table 1).

Several other studies have also focused on, or used, different categories of feedback. The same categorizations were used as a framework for Hargreaves, McCallum and Gipps’ (2000) more recent research, in which they looked in detail at teachers’ teaching, assessment and feedback strategies in primary classrooms. The research took place in twenty schools with eleven teachers of year 2 and twelve teachers of year 6. In mid-1997 the researchers interviewed head-teachers and observed lessons, and towards the end of 1997 they observed up to five lessons in each of the twenty-three classrooms. They held post-observation interviews and involved teachers in discussion about theories of learning.

In early 1998 there was a further visit to ten case study teachers. Two lessons were observed in each classroom and the teachers took part in a ‘Quote Sort’ activity. Teachers sorted fourteen quotes, which focused on teaching, assessment and feedback strategies, and on pupil learning. In mid-1998 the ‘Quote Sort’ activity was undertaken with the non-case study teachers, and towards late 1998 there were focus group interviews in both LEAs. What they found was that, depending on how teachers perceived learning to come about, and what sort of learning they hoped to encourage, teachers used a repertoire of feedback strategies in order to bring about transformation in learning. This work confirmed that teachers use a repertoire of feedback strategies that map readily onto Tunstall and Gipps’ (1996b) typology. They concluded that, in part, the choice of feedback strategies depends on teachers’ beliefs about how children learn.

The difference between evaluative and descriptive feedback is also the focus of a study by Davies (2013). She argues that descriptive feedback supports learning because it reduces uncertainty by telling students what is working and what is not. In contrast, she suggests, evaluative feedback, which is usually encoded (letters, numbers, other symbols) and includes praise, punishments and rewards, does not give enough information for students to understand what they need to do in order to improve. Kohn (2013) refers to this as “the praise problem” and states that while some approving comments are not only acceptable but positively desirable, others are neither. He suggests that the difficulty could be because different people mean different things by ‘praise’, ‘reward’ or ‘positive feedback’. He argues that “young children don’t need to be rewarded to learn; at any age, rewards are less effective than intrinsic motivation for promoting effective learning; rewards for learning undermine intrinsic motivation” (p. 96). Crooks (2010) agrees that praise should be used sparingly and, where used, should be task specific, whereas criticism (other than simply identifying deficiencies) is usually counterproductive. He argues that feedback should be specific and related to the need (p. 469).

Written feedback

Ronayne’s (2013) research focused on written feedback, and teachers were asked to give a particular type of feedback. He investigated eight separate occasions, across a range of subjects and secondary school age groups (11-13 years), on which teachers marked their pupils’ work and gave written feedback. Each case study followed the same procedure. When the task was completed, the teacher marked the work with formative feedback (no grades) and then the comments were analysed.

After the students received the written feedback, they were questioned about the feedback they received. The categories Ronayne identified and used were ‘organizational’, ‘encouraging/supportive’, ‘constructive’, ‘think’, and ‘challenging’. While these appear to be different, there are elements which are very similar to evaluative and descriptive feedback. He describes ‘organizational’ comments as dealing with such things as date, title and correction of spelling, with praise and ticks, and ‘think’ comments as those where the answer is neither corrected nor directly taught, such as ‘unnecessary’.

These have clear similarities to evaluative feedback in that there is no focus on quality. He explained ‘constructive’ comments as showing how something could be done or built on, and ‘challenging’ as taking a task from explanation to evaluation. These categories are work focused and similar to descriptive feedback.

Positive feedback

In a similar way, Hattie and Timperley (2007) discuss forms of feedback that are positive, such as reinforcement, corrective feedback, remediation and feedback, diagnoses and feedback, and mastery learning. They also discuss immediate (often verbal) versus delayed (often written) feedback, and less effective forms of feedback such as extrinsic rewards and punishment. The effectiveness of these forms of feedback was also a discussion point for Gipps and Simpson (2005).

Informal and formal feedback

Feedback can be seen as informal (for example, in day-to-day encounters between teachers and students or trainees, or between peers or colleagues) or formal (for example, as part of written or clinical assessment). However, “there is no sharp dividing line between assessment and teaching in the area of giving feedback on learning” (Rowe & Wood, 2008). Feedback is part of the overall dialogue or interaction between teacher and learner, not a one-way communication.

If feedback is not given, students may think that everything taught is alright and that there are no areas for improvement. Learners value feedback, especially when it is given by someone credible who they respect as a role model or for their knowledge, attitudes or clinical competence. Failing to give feedback sends a non-verbal communication in itself and can lead to mixed messages and false assessment by the learner of their own abilities, as well as a lack of trust in the teacher.

RESEARCH METHODOLOGY

The Research Design

An exploratory-survey design was adopted for the study. An exploratory design is used for a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome (Lowanto, 2019). The focus is on gaining insights and familiarity for later investigation, or the design is undertaken when research problems are at a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue (Lowanto, 2019).

Exploratory research is intended to produce the following possible insights: familiarity with basic details, settings, and concerns; a well-grounded picture of the situation being developed; generation of new ideas and assumptions; development of tentative theories or hypotheses; determination about whether a study is feasible in the future; refinement of issues for more systematic investigation and formulation of new research questions; and direction for future research and development of techniques (Lowanto, 2019).

A quantitative approach involving quantitative data from different sources was used to corroborate findings in this study. As a methodology, it involves philosophical assumptions that guide the direction of the collection and analysis of data. As a method, it focuses on collecting and analysing numerical data in a single study or series of studies. Its central premise is that the use of quantitative approaches in combination provides a better understanding of research problems (Creswell, 2012).

Population

The study was conducted in the Eastern Region of Ghana. Students representing a cross-section of mainstream public and private senior high schools in Ghana were included in the sample, which was believed to be representative of the target population, typical of senior high schools located in the Eastern Region of Ghana. The preliminary study, aimed at identifying the actual (existing) types of feedback given in chemistry classrooms, cut across twenty senior high schools. The population consisted of chemistry teachers and their corresponding students from senior high schools in Ghana.

After the preliminary study, the first year chemistry class from Yilo Krobo SHS in the Yilo Krobo Municipal Directorate of Education in the Eastern Region of Ghana was selected for the intervention due to the multicultural nature of the participants, proximity and associated cost. The researcher, in consultation with the chemistry teacher, selected chemical bonding as the topic for which the researcher interacted with the students incorporating FAST feedback, which yielded positive results.

Table 3: Public and Private Schools in Yilo Krobo Municipal Educational Directorate.

Level                 Public   Private   Total
Kindergarten            79        30       109
Primary                 79        30       109
Junior High School      42         4        46
Senior High School       2         1         3
Total                  202        65       267

A letter (Appendix A) describing the study rationale and procedure was sent to the headmasters of the respective senior high schools. Following this, a similar letter was sent to the heads of the science departments in these participating schools (Appendix A). Data were collected in November 2010.

Sample and Sample Size

A purposive sampling approach was used to select student participants for the study (Oliver, 2013). This is a form of non-probability sampling in which decisions concerning the individuals to be included in the sample are taken by the researcher, based upon a variety of criteria, which may include specialist knowledge of the research issue, or capacity and willingness to participate in the research (Oliver, 2013). In purposive sampling, participants are handpicked; in this case the chemistry teachers helped the researcher to do the sampling since the teachers were familiar with the students. Student participants were selected from SHS1 in all the participating schools. The basic assumption for selecting schools for the study was that the schools had similar characteristics, such as chemistry teachers who had been teaching for at least two years, an average student age of 16 years, and the same curriculum to follow.

The region is generally cosmopolitan: almost all social and ethnic groupings are represented, some children attended deprived schools whilst others attended well-endowed schools, and the students came from diverse home backgrounds. Participating schools included schools from both rural and urban areas. Participants were also selected from single-sex schools as well as co-educational institutions. The sample size was 885 (n = 885), comprising 795 students and 20 teachers for the pre-intervention study and 70 students for the post-intervention study. The sampling frame is presented in Table 4.

Table 4 The Sampling Frame for the Participants Survey (Pre and Post Intervention)

School Type        Rural Males   Rural Females   Urban Males   Urban Females   Total
Co-educational          95             65             185              90        435
Boys Only                –              –             250               –        250
Girls Only               –              –               –             180        180
Teachers                 4              1              11               4         20
Total                   99             66             446             274        885

Participants who, as shown in Table 4, indicated their interest by completing and returning the consent form were selected to participate in the survey. Using the purposive sampling method helped to obtain a sample of participants who were informed and who also provided a range of important perspectives to the research. It is worthy of note that the contribution of the key stakeholders was crucial to this study.

Instrumentation

Three main instruments were developed for data collection. The first instrument was a questionnaire for both teachers and students (Appendices C and D respectively). The second instrument was a teachers’ interview protocol (Appendix B) of twenty (20) items based on the qualities (FAST) of feedback. The third instrument was the observation schedules (Appendices E and F), which were also used to collect data from students’ notebooks and from chemistry lessons. The combination of these approaches ensured triangulation. Triangulation, according to Rothbauer (2008), is “getting the data through a variety of different strategies so as to strengthen and verify the research findings”.

These instruments sought to collect baseline data on the existing ways by which chemistry teachers give feedback to their students. Both teachers and students participated in this initial data collection, which helped to establish a relationship between the responses of the teachers and the responses of their respective students. This was followed by classroom tuition using the procedures of the FAST feedback method to help students towards effective learning. The underlying principle of the FAST feedback method is the frequency with which the teacher poses problems for students to respond to throughout the lesson and the immediate feedback that the teacher gives to students.

After every topic, the students were given a class test, and the students’ performances were used to determine the effect of this method of teaching. Students also responded to a post-intervention questionnaire, which buttressed their results in the class tests. The instruments for the study were developed by drawing inspiration from the instruments described under the theoretical framework.

Teacher Survey Questionnaire

The teacher questionnaire comprises five sections. The first section elicited information on demographic data regarding the teachers’ age, qualification, years of teaching experience, area of teaching specialization, and class size. The other four sections focused on how the teacher provides feedback with reference to the qualities (FAST) of feedback. The scales forming the core around which the Items were developed are frequent, accurate, specific and timely (FAST) feedback. Thus, Items under each scale aimed at finding out from the chemistry teachers how they give feedback to their students during chemistry class.

Each Item was rated on a Likert scale with weights ranging from five (5) to one (1). Respondents were to tick the box corresponding to how the chemistry teacher provides feedback to students: a tick for strongly agree scored 5, agree scored 4, not sure scored 3, disagree scored 2, and strongly disagree scored 1.
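For illustration, the scoring just described amounts to mapping each ticked response to its weight and then averaging the weights per Item. The short Python sketch below only illustrates that arithmetic (the study’s analysis was done in SPSS); the Item names and responses shown are hypothetical.

import pandas as pd

# Hypothetical mapping from Likert responses to the weights described above
WEIGHTS = {"strongly agree": 5, "agree": 4, "not sure": 3,
           "disagree": 2, "strongly disagree": 1}

# Hypothetical ticked responses for two Items of the questionnaire
responses = pd.DataFrame({
    "F1": ["agree", "not sure", "disagree", "strongly disagree"],
    "F2": ["disagree", "disagree", "not sure", "strongly disagree"],
})

scores = responses.apply(lambda col: col.map(WEIGHTS))  # labels -> weights 1..5
print(scores.agg(["mean", "std"]).round(2))             # per-Item mean and SD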

Student Survey Questionnaire

The purpose of the student survey was to investigate students’ perceptions of how their chemistry teachers give feedback during classroom discourse. The questionnaire comprises five sections. The first section asks for demographic data, including age, students’ school type (boys/girls/co-educational), year/level and class size. The other sections asked students how their chemistry teachers give feedback and whether the feedback given has the FAST qualities. Seven items were developed under the scale frequent, and four items were developed under the scale accurate. Also, five items each were developed for the attributes specific and timely. Student respondents were expected to place a tick in the box corresponding to the Likert scale point that measured how they perceived their chemistry teachers’ feedback to students in class.

RESULTS

Demographic Data

The pre-intervention investigations were conducted in twenty SHS in the Eastern Region of Ghana, which were either single-sexed or co-educational, and either rural or urban. Teachers who taught chemistry and students who studied chemistry participated. The data on the participants are shown in terms of frequency in Table 8.

Table 8 Number of participants in the different types of schools. (n=815)

Group                        Rural Males   Rural Females   Urban Males   Urban Females   Total
Students, Co-educational          85             55             175              80        395
Students, Boys Only                –              –             240               –        240
Students, Girls Only               –              –               –             160        160
Teachers, Co-educational           4              1               9               3         17
Teachers, Single-sexed             –              –               2               1          3
Total Participants                89             56             426             244        815

Participants in the Different Types of Schools

From Table 8, 395 students from co-educational institutions, representing 48.5% of the pre-intervention participants, took part in the study. This was made up of 85 (10.4%) males and 55 (6.7%) females from rural schools and 175 (21.5%) males and 80 (9.8%) females from urban schools. Single-sexed male schools had 240 (29.45%) students and single-sexed female schools had 160 (19.6%) student-participants. Thus, 795 (97.55%) students participated in the pre-intervention study. Teachers who participated in the study were 20 (2.45%), of which 17 (2.08%) came from co-educational institutions and three (0.4%) came from single-sexed schools. Schools from rural areas which participated in the study were all co-educational. The intervention exercise was conducted with 70 first year chemistry students, comprising 27 females and 43 males, drawn from Yilo Krobo SHS in Somanya in the Eastern Region of Ghana, which brought the total sample size to 885 (n = 885). These students offered chemistry, mathematics, physics, biology or agricultural science as elective subjects and were classified as students offering the science programme or the agricultural science programme.

Key Finding 1

A total of 885 participants took part in the study. Of these, 815, comprising 795 students and 20 teachers from co-educational, single-sexed, rural and urban institutions, participated in the pre-intervention study. None of the rural schools was single-sexed. The intervention exercise involved 70 first year chemistry students from Yilo Krobo SHS.

Personal Details of Teacher Participants

Details of teacher-participants have been analysed in terms of teaching experience, subject specialization and highest qualification and have been presented in Table 9.

Table 9 Details of teacher participants

Teaching Experience (years)   Subject Specialisation      Highest Qualification
5 and above                   Chemistry (7)               BSc; PGDE (Chem) 3; B Ed (Chem) 4
4-5                           Physics (2)                 B Sc (Phy) 1; B Ed (Phy) 1
3-4                           Biology (5)                 B Sc (Bio) 3; B Ed (Bio) 2
2-3                           Agricultural science (4)    BSc (Agric. Sc.) 3; B Ed (Agric) 1
1-2                           Mathematics (1)             B Sc (Math) 1
1                             Engineering (1)             B Sc (Eng) 1

From Table 9, only seven out of the 20 teacher participants studied chemistry; the remaining 13 were non-chemistry teachers who studied biology, physics, agricultural science, mathematics, or engineering. The first seven teachers were trained and certified as teachers of chemistry. Three of these seven teachers first trained only in Content Knowledge (CK) and later studied for the Post Graduate Diploma in Education (PGDE), which equipped them with the skills of teaching. The other four studied both Content Knowledge (CK) and Pedagogical Knowledge (PK) alongside each other (B Ed). Whereas these seven professional chemistry teachers had taught for a period of five years or more, the thirteen non-professional chemistry teachers had teaching experiences of between one and five years.

Key Finding 2

Of the 20 teacher-participants, seven were professionals and had taught for five or more years, while 13 were non-professionals and had been teaching for between one and five years. The non-professionals were trained in physics, biology, mathematics, agricultural science or engineering.

Pre-intervention Results

The pre-intervention results covered the responses by 795 students and 20 teachers to the questionnaires on FAST feedback. Teachers’ responses to the interview questions and the results of observation schedules A and B provided triangulation for the study.

Students’ Responses to Questionnaire on Frequent Feedback

Items F1-F7 of the questionnaire (Appendix C) required students to provide responses to Items on frequent feedback. The responses have been analysed in terms of frequencies, means, SD and Cronbach’s alpha and presented as Table 10.

Table 10 Students’ Responses to Items on Frequent feedback (n=795)

Note: Figures in brackets are the respective percentages. The Items are ranked by decreasing order of means. Standard Deviation = SD. Cronbach’s alpha for the seven Items = 0.941.

From Table 10, Item F4 had the highest mean of 2.23 and Item F7 followed with a mean of 1.50. Items F6, F1, F3 and F2 followed with means of 1.18, 1.15, 1.13 and 1.12 respectively. Item F5 had the least mean of 1.03. All seven Items had a Cronbach’s alpha value of 0.941 and so are considered reliable because the value is greater than 0.75.
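Since reliability throughout this section is judged by Cronbach’s alpha exceeding 0.75, the sketch below shows the standard formula for alpha (based on the ratio of the summed Item variances to the variance of respondents’ total scores) applied to a small matrix of Likert scores. It is a minimal illustration, not the SPSS routine used in the study, and the response matrix is hypothetical.

import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    # Cronbach's alpha for a respondents-by-Items matrix of Likert scores
    k = item_scores.shape[1]                          # number of Items
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each Item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores: rows = respondents, columns = Items (e.g. F1-F3)
data = np.array([[2, 1, 1],
                 [3, 2, 2],
                 [1, 1, 1],
                 [4, 3, 3]])
print(round(cronbach_alpha(data), 3))   # values above 0.75 were treated as reliable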

Key Finding 3

Students’ mean ratings of the Items on frequent feedback were: teachers respond to students’ questions (2.33); do not comment on what students feel needs to be changed (1.50); do not always comment on what students like about the lessons (1.18); give no more than four instances of feedback in a lesson (1.15); do not return marked class tests once a week (1.13); do not return assignments once a week (1.12); and make no comments on what students say they enjoyed about lessons (1.03).

Teachers’ responses to frequent feedback

Items F1-F7 of the questionnaire (Appendix B) required teachers to provide responses for the attribute frequent feedback. Teachers’ responses were analysed in terms of frequency, percentages, means, standard deviations and Cronbach’s alpha and presented in Table 11.

Table 11 Teacher self-ratings on the attributes of frequent feedback (n=20)

Items Agree Not Sure Disagree Mean SD Cronbach Alpha
F4. I respond to every question students ask 15(75) 0(0) 5(25) 2.50 1.00 0.945
F5. I comment on what students enjoyed. 15(75) 0(0) 5(25) 2.50 1.00 0.950
F6. I comment on what students did not like 10(50) 10(50) 0(0) 2.50 0.58 0.953
F7. I always remark on what students feel need to be changed. 10(50) 5(25) 5(25) 2.25 0.96 0.946
F1. I give more than four feedback in a lesson 5(25) 10(50.0) 5(25) 2.00 0.58 0.941
F2. The rate I return marked assignment is once a week. 0(0) 10(50) 10(50) 1.50 0.96 0.937
F3. The rate at which I return marked class test to students is once every fortnight 0(0) 0(0) 20(100) 1.00 0.44 0.952

Note: Figures in brackets are the respective percentages. The Items are ranked by decreasing order of means. Standard Deviation =SD. Cronbach’s alpha (CA) for the seven Items= 0.946

From Table 11, Items F4, F5 and F6 had the highest mean of 2.50 each whilst Item F7 followed with a mean of 2.25. Item F1 with a mean of 2.00 was the next and Item F2 followed with 1.50. The final Item was F3 with a mean of 1.00. All Items under the attribute frequent feedback had Cronbach’s alpha of 0.946 > 0.75 and so are reliable.

Key Finding 4

Means of teachers’ responses to the Items with respect to frequent feedback were: teachers respond to every question students ask (2.50); teachers comment on what students enjoyed (2.50); teachers comment on what students did not like (2.50); teachers remark on what students feel needs to be changed (2.25); the number of feedback instances teachers give during any lesson is more than four (2.00); teachers do not return marked assignments to students once every week (1.50); teachers are not able to return marked class tests every fortnight (1.00).

Comparison between Students’ and Teachers’ Means for Frequent Feedback

Mean scores for the Items of the questionnaires in respect of the attribute frequent feedback have been analysed as teachers’ mean, students’ mean and variance and presented as Table 12.

Table 12 Teachers and Students Means for Frequent Feedback compared

 Item F1 F2 F3 F4 F5 F6 F7
Teachers Mean 2.00 1.50 1.00 2.50 2.50 2.50 2.25
Students Mean 1.51 1.12 1.13 2.23 1.03 1.18 1.50
Variance 0.85 0.38 -0.13 0.27 1.47 1.32 0.75

From Table 12, teachers’ scores were higher than students’ scores in all the Items except Item F3, where the teachers’ mean was 1.00 and the students’ mean was 1.13, giving a variance of -0.13.
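The “Variance” row in Table 12 (and in the later comparison tables) is simply the teacher mean minus the corresponding student mean for each Item. A minimal sketch of that subtraction, using Items F4 and F3 from Table 12, is shown below.

# Per-Item means on the five-point scale, taken from Table 12
teacher_means = {"F4": 2.50, "F3": 1.00}
student_means = {"F4": 2.23, "F3": 1.13}

# "Variance" as used in the comparison tables: teacher mean minus student mean
for item in teacher_means:
    print(item, round(teacher_means[item] - student_means[item], 2))
# F4 ->  0.27 (teachers rate their feedback higher than students do)
# F3 -> -0.13 (students rate it higher)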

Students’ Responses to Questionnaire on Accurate Feedback

Item statements A8-A11 of the questionnaire (Appendix C) required students to provide responses in respect of the attribute accurate feedback. Students’ responses have been analysed in terms of frequency, percentages, means, standard deviations and Cronbach’s alpha and presented in Table 13.

Table 13 Student ratings of the attribute, Accurate feedback (n=795)

Items Agree Not Sure Disagree Mean SD CA
A10.What teacher writes in students books concern quality of work 55(6.9) 90(11.3) 650(81.8) 1.25 0.96 0.978
A9. What teacher writes in students exercise books concern the level of their performance 45(5.7) 100(12.5) 650(81.8) 1.24 0.98 0.982
A11. Teachers feedback concern students style to answer questions 45(5.7) 35(4.4) 715(89.9) 1.16 0.92 0.980
A8. Teacher writes comments on every mistake students  make in their exercise books 50(6.3) 20(2.5) 725(91.2) 1.15 0.89 0.980

Note: Figures in brackets are percentages. The Items are ranked by decreasing order of means.  Standard Deviation =SD. Cronbach’s alpha for the Items= 0.980

From Table 13, Item A10 had the highest mean of 1.25, followed by Item A9 with a mean of 1.24. Item A11 followed with a mean of 1.16, and A8 had the least mean of 1.15. All Items had a Cronbach’s alpha of 0.980 > 0.75, and so the Items are considered reliable.

Key Finding 5

Students’ means for the Items on accurate feedback were as follows: what teachers write in students’ exercise books (i) concerns the quality of work (1.25); (ii) reflects the level of their performance (1.24); (iii) does not depict the style of approach used to answer questions (1.16); (iv) has nothing to do with the mistakes they make in the answers they provide to teachers’ questions (1.15).

Teachers’ responses to Questionnaire on accurate feedback

Items A8-A11 of the questionnaire (Appendix B) required teachers to provide responses to accurate feedback. Teachers’ responses have been analysed in terms of frequencies, means, and SD and Cronbach’s alpha and presented in Table 14.

Table 14 Teacher self-ratings of the attribute Accurate feedback (n=20) 

Items Agree Not Sure Disagree Mean SD      CA
A10. What I write in students exercise books concerns the quality of work 15(75) 5(25) 0(0) 2.75 0.50 0.851
A9. What I write in students exercise books concerns the level of their performances 10(50) 10(50) 0(0) 2.50 0.58 0.850
A8.I write comments on every mistake students make in their exercise books 15(75) 0(0) 5(25) 1.20 1.50 0.851
A11.Feedback I give concern students style to answer questions 5(25) 10(50) 5 (25) 1.20 0.82 0.852

Note: Figures in brackets are the respective percentages. The Items are ranked by decreasing order of means. Standard Deviation = SD. Cronbach’s alpha (CA) for the four Items = 0.851.

From Table 14, Item A10 had the highest mean of 2.75, whilst Item A9 followed with a mean of 2.50. Items A8 and A11 both had the least mean of 1.20. All the Items had an alpha value of 0.851 > 0.75, so the Items are considered reliable.

Key Finding 6

Means of teachers’ responses to the Items with respect to accurate feedback were as follows: teachers write comments on students’ mistakes (1.20); feedback concerns the level of their performances (2.50); what teachers write concerns the quality of students’ work (2.75); what teachers write concerns the style of approach used to answer questions (1.20).

Comparison between Students’ and Teachers’ Means for Accurate Feedback

Means of respondents’ ratings of the Items of the questionnaires for accurate feedback have been analysed as teachers’ mean, students’ mean and variance and presented as Table 15.

Table 15 Teachers and Students Means for Accurate Feedback compared

Item A8 A9 A10 A11
Teachers Mean 1.20 2.50 2.75 1.20
Students Mean 1.15 1.24 1.25 1.16
Variance 0.05 1.26 1.50 0.04

From Table 15, Teachers’ means were higher than students’ means in all the Items (A8-A11). The respective variances were 0.05, 1.26, 1.50 and 0.04.

Key Finding 7

Teachers’ means for all the Items were higher than the students’ means; however, in Items A8 and A11, the teachers’ and students’ means were very close, as shown by the variances.

Students’ responses to Questionnaire on Specific feedback

Items S12-S16 of the questionnaire (Appendix C) required students to provide responses in respect of specific feedback. Students’ responses have been analysed in terms of frequencies, means, standard deviation and Cronbach’s alpha values and presented in Table 16.

Table 16 Students’ ratings of the attribute specific feedback (n=795)

Note: Figures in brackets are the respective percentages. The Items are ranked by decreasing order of means. Standard Deviation = SD. Cronbach’s alpha for the five Items = 0.980.

From Table 16, Item S13 had the highest mean of 2.40 followed by Item S14 with a mean of 1.69. Item S12 with a mean of 1.30 was the next whilst Item S15 followed with a mean of 1.18. S16 with a mean of 1.16 was the final Item and the Cronbach’s alpha value obtained for all the Items was 0.980 > 0.75 and so all Items were considered reliable.

Key Finding 8

Means of students’ ratings of specific feedback were: teachers’ feedback (i) is directed toward what is done correctly (2.40); (ii) is directed toward students’ responses in class (1.69); (iii) is directed towards what is not done correctly (1.30); (iv) provides correct answers to omissions (1.18); (v) provides correct answers to wrong responses by students (1.16).

Teachers’ responses to Questionnaire on Specific feedback

Items S12-S16 of the questionnaire (Appendix B) required teachers to provide responses in respect of specific feedback. Teachers’ responses have been analysed in terms of frequencies, mean score, standard deviation and alpha values and presented in Table 17.

Table 17 Teachers’ self-ratings on specific feedback (n=20)

Items Agree Not Sure Disagree Mean SD CA
S16. Feedback I give is directed towards what is not done correctly 10(50) 5(25) 5(25) 2.25 0.50 0.903
S15. Feedback I give provide correct answers to students’ omissions. 10(50) 5(25) 5(25) 2.25 0.96 0.901
S14.Feedback I give is directed toward students responses in class 10(50) 5(25) 5(25) 2.25 0.96 0.899
S12.Feedback I give provide correct answers to wrong responses 10(50) 0(0) 10(50) 2.00 0.50 0.903
S13.Feedback I give is directed towards what is done correctly 0(0) 10(50) 10(50) 1.10 0.58 0.902

Note: Figures in brackets are the respective percentages. The Items are ranked by decreasing order of means. Standard Deviation = SD. Cronbach’s alpha (CA) for the five Items = 0.902.

From Table 17, Items S16, S15 and S14 had the highest mean of 2.25 each. Item S12 followed with a mean of 2.00 and finally Item S13 followed with a mean of 1.10. All the Items had alpha value of 0.902 which indicates reliability because it is greater than 0.75.

Key Finding 9

Means of teachers’ ratings of specific feedback were: teachers’ feedback (i) is directed towards what is not done correctly (2.25); (ii) provides correct answers to students’ omissions (2.25); (iii) is directed toward responses in class (2.25); (iv) provides correct answers to wrong responses (2.00); (v) is directed towards what is done correctly (1.10).

Comparison between Students’ and Teachers’ Means for Specific Feedback

Means for the Items of the questionnaires in respect of specific feedback have been analysed as teachers’ mean, students’ mean and variance and presented as Table 18 for comparison.

Table 18 Teachers and Students Means for Specific Feedback compared

From Table 18, teachers’ scores are higher than students’ scores except for Item S14, where the teachers’ score was 1.00 and the students’ score was 1.13, with a variance of -0.13.

Key Finding 10

Responses from teachers and students in respect of the attribute specific feedback are at variance. Teachers’ means were higher than the students’ means for all the Items except Item S14.

Teachers’ Responses to Questionnaire on Timely Feedback

Items T17-T21 of the teachers’ questionnaire (Appendix B) required teachers to provide responses in respect of the attribute timely feedback. Teachers’ responses have been analysed in terms of frequencies, mean score, standard deviation and Cronbach’s alpha values and presented in Table 20.

Table 20 Teachers’ ratings on timely feedback (n=20)

Items Agree Not Sure Disagree Mean SD CA
T17. I return marked assignments to students a week after submission 10(50) 10(50) 0(0) 3.00 0.50 0.949
T20. I go round to provide clues to students working in class 10(50) 5(25) 5(25) 2.25 1.71 0.950
T19. I react promptly to students’ responses 10 (50) 0(0) 10(50) 1.75 1.50 0.941
T21. I stay longer with students who struggle in class to mark their work. 10(50) 0(0) 10(50) 1.75 1.50 0.949
T18. I return students assignments long time after submission 0(0) 5(25) 15(75) 1.25 0.50 0.948

From Table 20, Item T17 had the highest mean of 3.00 and was followed by Item T20 with a mean of 2.25. Items T19 and T21 followed directly with a common mean of 1.75 whilst Item T18 came last with a mean of 1.25. All Items under this attribute had an alpha value of 0.949 which was greater than 0.75 and so all the Items were considered reliable.

Key Finding 12

Means of teachers’ responses to the Items with respect to timely feedback were: teachers return marked assignments to students a week after submission (3.00); teachers go round to provide clues to students working in class (2.25); teachers react promptly to students’ responses (1.75); teachers stay longer with students who struggle in class to mark their work (1.75); teachers return students’ marked assignments a long time after submission (1.25).

Comparison between Students’ and Teachers’ Mean Scores for Timely Feedback

Mean scores for the Items of the questionnaires in respect of timely feedback have been analysed as teachers’ mean, students’ mean and variance and presented as Table 21 for comparison.

Table 21 Teachers and Students Means for Timely Feedback compared

As shown in Table 21, teachers’ means were higher than students’ means in all the Items except T18. Items T17, T18, T19, T20 and T21 respectively had teachers’ means of 3.00, 1.25, 1.75, 2.25 and 1.75, with corresponding students’ means of 1.21, 2.94, 1.28, 1.24 and 1.15. The respective variances are 1.79, -1.69, 0.47, 1.01 and 0.60.

Key Finding 13

Teachers’ and students’ responses on timely feedback are at variance for all the Items. Teachers’ means were higher than students’ means except for Item T18.

Correlation coefficients among the attributes of FAST feedback

Gardner and Martin (2007) and Jamieson (2004) contend that Likert data are of an ordinal or rank-order nature and hence only non-parametric tests will yield valid results. However, Norman (2010), using real scale data, found that parametric tests such as Pearson correlation and regression analysis can be used with Likert data without fear of “coming to the wrong conclusion”, as Jamieson (2004) puts it. Path coefficients are standardized because they are estimated from correlations (a path regression coefficient is unstandardized). In this study, the correlation coefficients describe the strength of the relationships among the attributes of FAST feedback, to whose Items both students and teachers responded.

Correlation Coefficient for Path Analysis (Students Responses)

The responses by the students to the attributes of FAST feedback have been analysed in terms of correlations and presented as Table 22.

Table 22 Correlation among the attributes of FAST feedback (Students’ Responses)

Attribute Frequent Accurate Specific Timely
Frequent  0.000  0.256 -0.396  0.096
Accurate  0.256  0.000  0.558 -0.154
Specific -0.396  0.558  0.000  0.736
Timely  0.096 -0.154  0.736  0.000

An examination of the simple correlation figures in Table 22 indicates that the relationships between some of the attributes of FAST feedback are weak and others are strong. Where the correlation is less than 0.5 (r < 0.5) the relationship is weak, and where the correlation is greater than 0.5 (r > 0.5) the relationship is strong. The implication is that students perceive the attributes (FAST) of their teachers’ feedback in class differently. Thus, from Table 22, the relationships between the attributes frequent-accurate, frequent-specific, frequent-timely and accurate-timely are all weak, whilst the relationships between accurate-specific and specific-timely are strong.
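As an illustration of how such a correlation matrix and its weak/strong reading can be produced, the sketch below computes Pearson correlations from per-respondent attribute scores and applies the r = 0.5 cut-off used above. It is only a sketch: the study’s analysis was done in SPSS, and the small score table here is hypothetical.

import pandas as pd

# Hypothetical per-respondent mean scores on the four FAST attributes
scores = pd.DataFrame({
    "Frequent": [1.2, 1.5, 2.1, 1.0, 1.8],
    "Accurate": [1.1, 1.3, 1.6, 1.2, 1.4],
    "Specific": [1.4, 1.9, 2.2, 1.3, 2.0],
    "Timely":   [1.2, 1.6, 2.0, 1.1, 1.9],
})

corr = scores.corr(method="pearson").round(3)   # matrix analogous to Table 22

# Classify each pair of attributes with the cut-off used in the text (r = 0.5)
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        r = corr.loc[a, b]
        strength = "strong" if r > 0.5 else "weak"
        print(f"{a}-{b}: r = {r} ({strength})")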

CONCLUSION

To crown it all, the pairs of attributes with positive correlation values (frequent-accurate, frequent-timely, accurate-specific, specific-timely) have the tendency to increase or decrease together; that is, students’ perceptions of the two attributes in such a pair move in the same direction. Students, for instance, think that their chemistry teachers’ feedback in class is not accurate enough and hold a similar view of its specificity and timeliness; hence the relationships between accurate-specific and specific-timely are strong.
