Students’ Evaluation of Learning Outcomes for Quality Assurance in the Teaching Process at Mountains of the Moon University, Fort Portal, Uganda in East Africa
David Katende
Lecturer, Faculty of Education, Department of Educational Planning and Management, Mountains of the Moon University
DOI: https://dx.doi.org/10.47772/IJRISS.2024.803127S
Received: 17 May 2024; Revised: 29 May 2024; Accepted: 04 June 2024; Published: 11 July 2024
ABSTRACT
This study employed a mixed-methods approach, with a predominance of quantitative analysis, to probe the user relevance of students’ evaluation of course learning outcomes for quality assurance in the teaching process at Mountains of the Moon University. Self-administered online questionnaires built with the KoBo Toolbox were used to answer the following question: What is the relevance of students’ evaluation of course learning outcomes (problem-solving skills, critical thinking skills, leadership skills, effective communication) as an aspect of quality teaching at Mountains of the Moon University? The study employed a case study design, and respondents comprised 170 undergraduate students in their second and third years and 13 postgraduate students at least in their second semester of study. Data collected through the online questionnaires and documentary review were analyzed using SPSS. Findings confirm the reliability of SETs as tools for quality assurance in teaching, highlighting the importance of focusing on learning outcomes and quantifiable performance indicators when developing effective SET tools. The design of teaching and learning assessment tools should be guided by clear objectives built on Bloom’s Taxonomy action verbs, which should be used to formulate appropriate learning outcomes. The insights emphasize that good teaching should focus on appropriate learning outcomes and that SETs are reliable tools for measuring teaching quality in higher education. Quantifiable performance indicators are essential for developing effective SET tools, in alignment with quality assurance policies. The study recommends that regulatory bodies benchmark against these findings as a basis for building candid evaluation-of-teaching guidelines for higher education institutions.
Keywords: Students’ Evaluation of Teaching tool, course learning outcomes, quality teaching, quality assurance.
INTRODUCTION
This article presents the background, indicating the historical, conceptual, contextual, and theoretical perspectives of the study, the problem statement, purpose, scope, and significance of the study.
The use of metrics to measure excellence in educational settings has been highly contested due to the inherent challenges of quantifying something as abstract as teaching excellence (Harfold, 2014). The White Paper on higher education teaching excellence, social mobility, and student choice highlighted the diverse metrics used globally to assess teaching quality (Greatbatch & Holland, 2016). According to Blackmore et al. (2016), assessing teaching quality to aid students and employers in comparing higher education institutions (HEIs) is a complex issue for the government.
Teaching quality is crucial in student decision-making. It encompasses learning environments, student support, course design, career preparation, and ‘soft skills,’ alongside classroom activities, significantly impacting student outcomes (BIS, 2016). Quality assurance (QA), defined as the process of establishing stakeholder confidence that educational provisions meet expectations or minimum requirements, is a term with varied interpretations depending on stakeholder perspectives (Bobby, 2014). Student Evaluation of Teaching (SET), synonymous with terms like Student Evaluation of Educational Quality (SEEQ) or student course evaluation, involves using student feedback to gauge teacher performance and attitude (Chen, 2016).
Contextual Perspective of the Study
Mountains of the Moon University (MMU), a community university founded on June 28, 2002, received a provisional license in March 2005 from the National Council for Higher Education (MMU Charter Document, 2018). It was granted a charter and subsequently directed to transition into a public community university in January 2018, officially becoming the eleventh public university in 2022 (Vice Chancellor’s Report to the Task Force, 2019). This study focuses on MMU students, who have significant experience and knowledge of evaluating course learning outcomes through student evaluations.
Problem Statement
Several studies in the international context have examined how student assessment has gradually become institutionalized in recent decades (Darwin, 2016). Studies observe that one of the important mechanisms of student assessment of quality is their rating, which performs a significant function in driving improvement in pedagogical practices in higher education. However, the role and functional purpose of this method have become increasingly confused and contested due to the rise of market-based models in higher education (Darwin, 2016). Furthermore, studies on students’ appraisal of the quality of higher education, and critical reflections on it, have been quite limited in the Ugandan context (Geoff & Tashmin, 2017).
Existing literature indicates that evaluation studies of teaching quality in higher education have largely focused on the general relevance of students’ evaluation of teaching, and thus impose a tool that is often a product with limited user (student) input, bringing into the spotlight the need to probe its level of relevance among users (students) (Geoff & Tashmin, 2017). It is statistically evident that over 45% of students ignore key question items on the assessment tool, which affects the quality of evaluation feedback reports on teaching and learning (Mountains of the Moon University Annual Quality Assurance Reports on Teaching and Learning, 2016 & 2017). If this matter is not addressed, the evaluation process may be rendered an irrelevant routine practice, as is the case in most universities in Uganda, Mountains of the Moon University inclusive. Although studies identify four key areas that inform the students’ assessment tool for quality of teaching (teaching methods, evaluation pattern of students, learning outcomes, and teacher characteristics) (Yossi et al., 2020), this article focuses on assessing the user relevance of students’ assessment of course learning outcomes for quality assurance in teaching.
Conceptual Framework
According to Adom & Hussein (2018), a conceptual framework is a structure which the researcher believes can best explain the natural progression of the phenomenon. It provides an illustration of the interrelated ideas or aspects of the variables/constructs, and is often organized using existing models (Adom & Hussein, 2018). The framework is presented below:
Figure 1: Conceptual framework model
Source adapted from: (Fisk et al, 2014, p. 151)
The model above depicts a one-to-one relationship, with students’ evaluation of teaching as the independent variable (IV) and quality teaching attributes as the dependent variable (DV). The independent variable constitutes four aspects: problem-solving skills, critical thinking skills, leadership skills, and effective communication. Assessing problem-solving skills looks into the extent to which students perceive their ability to apply knowledge to solve practical problems; effective problem-solving skills are a critical aspect of students’ evaluations, as they reflect the practical application of theoretical knowledge. The critical thinking skills aspect evaluates students’ ability to analyze, evaluate, and synthesize information; critical thinking is essential for students to navigate complex issues and make informed decisions. The leadership skills element measures how well the course fosters leadership abilities among students, including skills such as team management, decision-making, and motivational strategies. The effective communication component assesses the development of students’ ability to convey ideas clearly and effectively in both written and oral forms; effective communication is crucial for academic and professional success.
The dependent variable of this study is a single aspect: quality teaching attributes. In this study, quality teaching attributes included the teacher’s ability to communicate high expectations, learner creativity and innovativeness, being highly practical, emphasizing project-based learning, and use of technology. Ability to communicate high expectations evaluates the teacher’s capability to set and communicate high academic standards and expectations for students, motivating them to achieve their best. Measuring learner creativity and innovativeness focuses on assessing the teacher’s ability to foster an environment that encourages creative thinking and innovation among students. Highly practical teaching methods evaluates the teacher’s use of practical teaching methods that enhance hands-on learning experiences. Emphasis on project-based learning evaluates the extent to which teachers incorporate project-based learning, which helps students apply theoretical knowledge to real-world scenarios. Use of technology measures the integration of technology in teaching, which can enhance learning experiences and make education more engaging and effective.
Scope of the Study
Geographically, the study focused on Mountains of the Moon University because, as a community-chartered and later public university, it was found to be richly endowed with quality assurance initiatives. This is further evidenced by its status as one of the highly rated universities in Uganda in the Webometrics rankings. Content-wise, the focus was on the user relevance of students’ evaluation of course learning outcomes for quality assurance in teaching at Mountains of the Moon University. Respondents included student leaders and students in their second and third years of study from the various academic disciplines (both female and male). Students and student leaders were selected as key respondents since they are the core beneficiaries of the teaching and learning process in universities. In terms of time, the study covered the period from January 2020 to September 2021.
Significance of the Study
This article is significant to academics, students, policy makers, and the community in the following ways. First, it provides literature to inform initiatives for designing inclusive theoretical framework(s) for developing appropriate university quality assurance mechanisms that clearly reflect a purpose-driven teaching and learning process in higher education. Furthermore, the findings provide valuable input into the discourse around the design of proper quality assurance institutional capacity indicators, specifically in the aspect of course learning outcomes. The article also provides a foundation for building candid guidelines for the evaluation of course learning outcomes by national and regional regulatory bodies such as the National Council for Higher Education (NCHE) in Uganda, the East African Higher Education Space (EAHES), and the Inter-University Council for East Africa (IUCEA). Finally, the findings should stimulate scholars to conduct further research focusing on general evaluation dimensions of teaching quality in higher education (HE) and on quality assurance best practices in particular.
REVIEW OF LITERATURE
Quality Assurance (QA) in Higher Education
The emergence of Quality Assurance (QA) has been the most significant change driver in higher education over the past decades. Numerous QA agencies have been established or expanded, leveraging evaluation and accreditation tools to regulate and define quality-based objectives and criteria (Normand, 2016). These agencies are expected to ensure university compliance and enhance student learning outcomes, creating a tension between “accountability” and “improvement” that has been widely discussed (Banta & Palomba, 2015). External QA, focusing on institutions and programs, emphasizes compliance and quality enhancements (Smidt, 2015). Internal QA, defined by Geven and Maricut (2015), involves evaluations conducted within universities, while external QA involves evaluations by government or other external actors.
Students’ Evaluation of Teaching (SET)
Student Evaluation of Teaching (SET) is a key tool in assessing teaching quality. One primary focus of SET is the student evaluation of the teacher (Gregory, 2018). Interpersonal characteristics of teachers can significantly influence student engagement and learning (Hu et al., 2015). However, SET is prone to various biases, including those based on the teacher’s physical appearance (Gregory, 2018). Research indicates that SET ratings are influenced by factors such as the instructor’s personality and gender, with studies showing biases against female faculty (Boring et al., 2016). Effective SETs are essential for higher education institutions to collect meaningful data on teaching performance (Gregory, 2018). Given the complexity and various antecedents of SETs, no single tool can perfectly measure classroom activities, highlighting the need for fair and objective evaluations and useful feedback (Pradeep et al., 2019).
Quality Teaching
The global expansion of higher education has led to ambitious educational goals requiring new approaches to curriculum, instruction, and learning (Kehm & Stensaker, 2009). In East Asia, HEIs are striving for higher quality and better rankings (Mok & Cheung, 2011). In India, the growth of engineering and technical education has driven the expansion of higher education (Pradeep et al., 2019). Economic, political, and social changes have transformed higher education, necessitating greater accountability and transparency in teaching quality (Costes et al., 2010). Reduced public funding has forced universities to become more autonomous and accountable to society (Amanda, 2017).
Internal and External Quality Assurance
In Japan, “internal quality assurance” was first referenced in higher education in a 2008 government proposal aligned with the 2005 European Standards and Guidelines (Noda et al., 2018). HEIs are now key players in the international “knowledge economy” (OECD, 2015). Transparency and accountability are central to QA, which aims to foster continuous improvement while ensuring accountability.
Relevance of Students’ Evaluation of Course Learning Outcomes
Quantifiable performance indicators are crucial for assessing university quality. These indicators explicitly describe evidence against which quality is measured (Kettunen, 2010). At Mountains of the Moon University, learning outcomes reflect affective, cognitive, and psychomotor domains, encompassing problem-solving skills, critical thinking, leadership, and effective communication (MMU Quality Assurance Policy, 2018). According to Bloom’s Taxonomy, created to promote higher-order thinking, learning involves knowledge and the development of intellectual skills (Adesoji, 2018). The hierarchy of cognitive domain behaviors ranges from simple (knowledge) to complex (evaluation), with mastery of lower levels necessary before progressing to higher levels. Bloom’s Taxonomy assists teachers in designing performance tasks, crafting questions, and providing feedback to promote higher-order thinking (Adesoji, 2018).
The Role of Bloom’s Taxonomy in Evaluating Learning Outcomes
Bloom’s Taxonomy, developed under Dr. Benjamin Bloom’s leadership, categorizes intellectual skills from basic knowledge to higher-order thinking such as analysis and evaluation. It emphasizes the progression from simple to complex cognitive tasks. The taxonomy aids educators in focusing on higher-order thinking and structuring their teaching to promote deeper learning beyond mere rote memorization (Adesoji, 2018). According to the taxonomy (a small illustrative sketch follows the list):
- Knowledge: Involves recalling data or information, using verbs like define, describe, identify, list, and state.
- Comprehension: Entails understanding and interpreting information, using verbs like explain, summarize, and translate.
- Application: Applies knowledge to new situations, using verbs like apply, demonstrate, and solve.
- Analysis: Involves breaking down information into parts and understanding its structure, using verbs like analyze, compare, and differentiate.
- Synthesis: Combines elements to form a new structure, using verbs like create, design, and organize.
- Evaluation: Judges the value of information for a purpose, using verbs like evaluate, judge, and recommend.
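Because the taxonomy’s action verbs are the recommended basis for phrasing learning outcomes (and, by extension, SET question items), a minimal illustrative Python sketch of how an outcome’s leading verb might be checked against the levels above is shown here; the verb lists and the `classify_outcome` function are hypothetical illustrations, not part of the study’s instruments:

```python
# Illustrative mapping of Bloom's cognitive levels to sample action verbs
BLOOM_VERBS = {
    "knowledge":     {"define", "describe", "identify", "list", "state"},
    "comprehension": {"explain", "summarize", "translate"},
    "application":   {"apply", "demonstrate", "solve"},
    "analysis":      {"analyze", "compare", "differentiate"},
    "synthesis":     {"create", "design", "organize"},
    "evaluation":    {"evaluate", "judge", "recommend"},
}

def classify_outcome(outcome):
    """Return the Bloom level suggested by the outcome's leading action verb."""
    first_verb = outcome.lower().split()[0]
    for level, verbs in BLOOM_VERBS.items():
        if first_verb in verbs:
            return level
    return "no recognized action verb - rephrase the outcome"

print(classify_outcome("Analyze the causes of low response rates in SETs"))  # analysis
print(classify_outcome("Understand quality assurance"))  # vague verb, flagged
```

A check of this kind could help SET designers flag vaguely worded outcomes (e.g., those beginning with “understand”) before they are turned into evaluation items.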
Implications for Higher Education
Higher education institutions operate increasingly like businesses, focusing on marketing, output, and performance metrics. The expansion of higher education globally has led to more ambitious educational goals, requiring innovative approaches to curriculum and instruction (Kehm & Stensaker, 2009). This trend is evident in East Asia and India, where competition and rankings drive the pursuit of quality (Mok & Cheung, 2011; Pradeep et al., 2019). Universities must be transparent and accountable, regularly assessing and improving teaching quality through QA processes (Costes et al., 2010).
Challenges and Considerations in Teaching Quality Evaluation
Despite the emphasis on quality teaching, there is limited research on the potential negative consequences of innovative and challenging teaching strategies. Lecturers who expose students to new ideas and encourage intellectual risk-taking may face lower evaluations due to student discomfort (Kelty & Bunten, 2017). However, innovative teaching is crucial for higher education, promoting critical thinking and problem-solving skills essential for student success and employability (Pradeep et al., 2019).
Summary
The literature on QA and SET highlights the complexities and challenges of measuring teaching quality. QA processes must balance accountability and improvement, while SET tools need to address biases and accurately reflect teaching effectiveness. Bloom’s Taxonomy provides a framework for evaluating learning outcomes, emphasizing higher-order thinking. As higher education evolves, institutions must continually adapt their QA and evaluation methods to ensure they meet the needs of students and society, fostering environments that promote quality teaching and learning.
RESEARCH METHODOLOGY
This section presents the methodology that guided the study, including the research design, study population, accessible population, sampling strategies, sample size, sampling procedure, data collection techniques, data collection instruments, data quality control (validity and reliability of instruments), methods of data analysis, procedure, and ethical issues.
Oniye (2017) defines a research design as the scheme, outline, or plan that is used to generate answers to research problems. This study undertook a mixed-methods approach (both quantitative and qualitative) but was to a greater extent quantitative, statistically generating the frequencies and percentages of responses (Creswell, 2014). A qualitative aspect was included for purposes of triangulation. The study used a descriptive research design, which, according to Mugenda & Mugenda (2007), supports the researcher in adopting the unit of analysis in a more accurate way; the descriptive design thus enabled the researcher to describe the state of affairs as it actually exists. Further, a case study design was used to partly inform the qualitative dimension of the study.
A case study design was considered on the basis that it entails studying phenomena incisively and cheaply in a short time (Creswell, 2012). For Yin (2012), case studies are a design of inquiry, found in many fields, especially evaluation, in which the researcher develops an in-depth analysis of a case, often a program, event, activity, process, or one or more individuals. These designs and approaches were chosen purposely to seek students’ views and opinions incisively, which were then described to assess the relevance of students’ evaluation of quality in the teaching process at Mountains of the Moon University.
According to Yin (2012), a population is the aggregate of units about which the study findings are to be generalized.
The study population was categorized by year of study: second year, third year, and postgraduate students across all disciplines. These categories were considered given their long stay in the University and hence their developed knowledge and experience of the processes of students’ evaluation of teaching at Mountains of the Moon University. The target and accessible populations are illustrated in Table 1 below.
Table 1: Target and accessible population

| S/N | Population category | Target: Males | Target: Females | Target: Total | Accessible: Males | Accessible: Females | Accessible: Total |
|-----|---------------------|---------------|-----------------|---------------|-------------------|---------------------|-------------------|
| 1 | Students’ leaders (students’ leaders at guild and school/faculty levels) | – | – | – | – | – | – |
| 2 | Second year students (across all disciplines) | 400 | 300 | 700 | 150 | 98 | 248 |
| 3 | Third year students/finalists (across all disciplines) | 226 | 140 | 366 | 100 | 86 | 186 |
| 4 | Post graduate students (across all disciplines) | 59 | 35 | 94 | 40 | 33 | 73 |
|   | Overall totals | 690 | 480 | 1,170 | 297 | 220 | 517 |
Source: Students’ Enrolment Report 2019/2020 as at March 13, 2020, Office of the Academic Registrar Mountains of the Moon University.
Sample Size Determination and Selection
Zodpey (2004) defined sampling as the process of selecting a few units (a sample) from a bigger group to become the basis for estimating or predicting the prevalence of an unknown piece of information, outcome, or situation regarding the bigger group. This definition agrees with Gibbs’ (2007) definition of the same.
In this study, purposive sampling was used. Purposive sampling involves selecting specific individuals from the population (Creswell, 2014), with selection driven by the sample’s possession of specific and unique knowledge in regard to the study (Robert, 2011). This approach was used to select 73 postgraduate students (40 males and 33 females), 10 student leaders at both faculty/school and guild levels (7 males and 3 females), 248 second year students (150 males and 98 females), and 186 third year students (100 males and 86 females). These categories were purposefully considered given their reasonably long stay in the University and their familiarity with the practice and use of students’ evaluation of teaching and learning tools at Mountains of the Moon University. The study further used the Krejcie and Morgan (1970) sampling table to determine the sample size (see the appended table for clarity of the sample selection, and the sketch below).
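For context, the Krejcie and Morgan (1970) table is generated by a closed-form formula. Below is a minimal Python sketch of that formula; the population figure used here (the overall target of 1,170 from Table 1) is purely illustrative of how the table values arise:

```python
import math

def krejcie_morgan(N, chi_sq=3.841, P=0.5, d=0.05):
    """Required sample size s for a finite population N (Krejcie & Morgan, 1970).

    chi_sq: chi-square value for 1 degree of freedom at the desired
            confidence level (3.841 for 95% confidence).
    P:      assumed population proportion (0.5 maximizes the sample size).
    d:      degree of accuracy expressed as a proportion (0.05).
    """
    s = (chi_sq * N * P * (1 - P)) / (d ** 2 * (N - 1) + chi_sq * P * (1 - P))
    return math.ceil(s)

# Illustrative use with the overall target population from Table 1
print(krejcie_morgan(1170))  # ~290, matching the published table for N of this size
```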
Both primary and secondary data sources were utilized; the corresponding data collection tools are explained below.
Primary data collection involved the following methods:
Self-Administered Questionnaires (SAQ)
This involved distributing questionnaires to respondents; after they were completed, the researcher collected them and entered the responses into the Statistical Package for the Social Sciences (SPSS) software for descriptive analysis. The SAQ was preferred because it enabled the researcher to solicit a large number of responses in a reasonably short time from literate respondents, as also argued by Creswell (2014). This was the main instrument; hence the study was largely quantitative.
Secondary Data Collection Methods / Documentary Review
For secondary data, the researcher reviewed updated policy documents such as the endorsed institutional policies related to the study. As stressed by Creswell (2014), documentary review was used to critically examine recorded information, mainly information from documents (both soft and hard copies) related to the study topic, with the sole aim of collecting data for further analysis to make inferences. Specifically, the researcher reviewed the Mountains of the Moon University (MMU) Charter (2018), the Students’ Enrolment Report (2019/20), the Quality Assurance Tools and Reports for evaluation of teaching and learning, and the reviewed Operational Plan for 2017/2018.
Data Collection Instruments/Tools
The researcher employed an online data collection instrument to collect primary data: an online self-administered questionnaire built with the KoBo Toolbox. Amin (2005) defined a questionnaire as a self-report instrument used to gather information about the research problem under investigation based on the objectives of the study. Self-administered online close-ended questions were used because the target population was literate; the respondents were students with email addresses who could read, understand the questions, and respond accordingly. It should be noted, however, that questionnaires have disadvantages: they tend to be tiresome for respondents to fill in, and they require literate respondents (Creswell, 2014).
NB: Online questionnaires were considered because this study was conducted during the COVID-19 lockdown.
Data Quality Control
Data quality was managed by ensuring the validity and reliability of the research instruments (Creswell, 2014).
Validity of Data Collection Instruments
Validity refers to the appropriateness of the instrument, or the ability of the instrument to measure what it is intended to measure (Creswell, 2012). The validity of a measurement instrument can take several different forms, such as face validity, content validity, criterion validity, and construct validity, each of which is important in different situations (Paul & Jeanne, 2013). Before the study was conducted, the researcher vetted the research instruments through proofreading and editing with the supervisor to ensure their validity for the purpose of the study. The questionnaire was developed based on the problem statement, purpose, and objectives of the study. The content validity index (CVI) technique was used to establish the validity of the instruments; the researcher ensured content validity by checking that all questions or items conformed to the study’s conceptual framework. The content validity index of the questionnaire items was computed using the formula:
CVI = (Number of items rated as relevant) / (Total number of items in the questionnaire)

If the CVI was equal to 0.7 or above, the questionnaire would be considered valid (Amin, 2005).
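As an illustration only (the expert ratings below are hypothetical, not the study’s actual ratings), the CVI computation and the 0.7 decision rule can be sketched in Python as:

```python
def content_validity_index(ratings):
    """CVI = number of items rated relevant / total number of items (Amin, 2005)."""
    relevant = sum(1 for rating in ratings if rating == "relevant")
    return relevant / len(ratings)

# Hypothetical expert ratings for a 10-item questionnaire
ratings = ["relevant"] * 8 + ["not relevant"] * 2
cvi = content_validity_index(ratings)
print(f"CVI = {cvi:.2f} -> {'valid' if cvi >= 0.7 else 'revise the items'}")  # CVI = 0.80 -> valid
```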
Table 2 below illustrates the correlation coefficients used in the study and their descriptors.

Table 2: Correlation coefficients and their descriptors

| Correlation coefficient | Descriptor |
|-------------------------|------------|
| 0.70 or higher | Very strong association |
| 0.50 – 0.69 | Substantial association |
| 0.30 – 0.49 | Moderate association |
| 0.10 – 0.29 | Low association |
| 0.01 – 0.09 | Negligible association |
Source: Amin (2005)
Reliability of Data Collection Instruments
Reliability, on the other hand, refers to the ability of the instrument(s) to obtain similar results at different times, that is, the consistency of an instrument in measuring what it is intended to measure (Paul & Jeanne, 2013). The questionnaire was pre-tested on 10 randomly selected respondents of various categories before the final survey for the following reasons: to find out the relevance of the questions to the problem; to find out whether the grammar used in the questionnaire could be understood by the respondents; and to find out whether the problem in question bears any meaning to the target population. A pre-study visit was also made to Mountains of the Moon University to establish rapport before the actual study. Adjustments and amendments to the questionnaire were made accordingly after this preliminary process. Reliability of the instruments was established using the Cronbach alpha method in SPSS (Revilla & Krasnick, 2014).
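The study computed Cronbach’s alpha in SPSS; as a minimal sketch of the same statistic under the standard formula (the pilot data below are hypothetical, not the study’s pre-test responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses from six pilot respondents to four items
pilot = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
    [3, 4, 3, 3],
]
print(round(cronbach_alpha(pilot), 3))  # values >= 0.7 are conventionally acceptable
```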
Procedure of Data Collection
The researcher secured a letter of introduction from UTAMU, which was presented to the respective participants to seek their consent and cooperation in the study. The researcher distributed the questionnaires and collected them after a period of one week. The researcher also arranged focus group discussions with some respondents at times convenient to them. Questionnaires were collected shortly after being filled in, to avoid misplacement or loss, and the researcher wrote down relevant information during the focus group discussions.
Data Analysis
Data analysis refers to analyzing data collected from the field using manual or computer-assisted techniques or both, as during editing, coding, and presentation of responses (Creswell, 2014).
Quantitative Data Analysis
The study findings recorded in the questionnaires were edited, coded, and summarized in accordance with the research objectives (Creswell, 2014). Each objective and its findings were analyzed to show percentages of acceptance for each item in the questionnaire in the form of strongly agree, agree, not sure, disagree, and strongly disagree. The Statistical Package for the Social Sciences (SPSS) was used to analyze the quantitative data; this involved the use of frequencies, percentages, and means, while the Pearson correlation coefficient and regression analysis were used to test the relationship between the independent and dependent variables and to establish which factor had more significance than the other, as also suggested by Creswell (2014). These findings were then interpreted to derive meanings, inferences, and relationships between the variables.
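The analysis itself was run in SPSS; a minimal Python sketch of the equivalent steps (the ten coded responses below are hypothetical composite scores, not the study’s data) might look like:

```python
import pandas as pd
from scipy import stats

# Hypothetical coded responses (1 = strongly disagree ... 5 = strongly agree)
df = pd.DataFrame({
    "learning_outcomes": [4, 5, 4, 3, 5, 4, 4, 2, 5, 4],  # IV composite
    "quality_teaching":  [4, 5, 4, 3, 4, 4, 5, 3, 5, 4],  # DV composite
})

# Frequencies and percentages, as in SPSS descriptive output
print(df["learning_outcomes"].value_counts(normalize=True).mul(100).round(1))

# Means and standard deviations
print(df.describe().loc[["mean", "std"]])

# Pearson correlation between the independent and dependent variables
r, p = stats.pearsonr(df["learning_outcomes"], df["quality_teaching"])
print(f"r = {r:.3f}, p = {p:.3f}")

# A simple linear regression of the DV on the IV
slope, intercept, r_val, p_val, stderr = stats.linregress(
    df["learning_outcomes"], df["quality_teaching"])
print(f"quality_teaching ~ {intercept:.2f} + {slope:.2f} * learning_outcomes")
```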
Ethical Considerations
Punch (2005) argues that, in addition to conceptualizing the writing process for a proposal, researchers need to anticipate the ethical issues that may arise during their studies. The study therefore adhered to Israel and Hay’s (2006) advice that researchers need to protect their research participants, develop trust with them, promote the integrity of research, guard against misconduct and impropriety that might reflect on their organizations or institutions, and cope with new, challenging problems. The researcher appreciated that respondents have the right to keep some information about themselves from the public as a means of ensuring privacy. Informed consent was ensured by seeking permission from the respondents to participate in this study.
Anonymity was ensured by making sure that all respondents’ identities remained anonymous and were not salient in the study. In the same vein, whatever was said in confidence remained confidential. Sensitivity to context was observed by ensuring that, since each organization operates in a particular and changing political and socioeconomic setting, external conditions were taken into account in designing and carrying out the research process. Integrity and transparency were observed by ensuring fairness and acceptance throughout the process.
PRESENTATION AND DISCUSSION OF FINDINGS
This section presents the findings/results of the study, which emerge from both qualitative and quantitative data. The quantitative findings appear in statistical tabulations: frequency distribution tables that present responses as numeric frequencies and percentage values, which can be compared to gauge differences in the magnitude of responses to the study variables. This gives a vivid, numeric, and clear interpretation of the data. The rest of the findings appear in narrative form.
To examine the relevance of students’ evaluation of course learning outcomes in ensuring quality in the teaching and learning process
To unpack the learning outcomes as an aspect of the independent variable (IV) in this study, seven items were used. The table below presents the total number of respondents, their mean responses, standard deviations, and interpretive scales.
Table 3: Items of course learning outcomes for a quality teaching and learning process
| No | Item | N | Mean | Std. Dev | Interp. Scale |
|----|------|---|------|----------|---------------|
| 1 | Quality teaching would be reflected if the course provides students with a deeper understanding of the concepts and subject matter | 182 | 4.27 | .842 | A |
| 2 | Quality teaching would be reflected if the course projects, assignments, tests, and/or exams provide students opportunity to demonstrate an understanding of the course materials | 181 | 4.22 | .841 | A |
| 3 | Quality teaching would be reflected if the course learning outcomes are met | 180 | 4.29 | .815 | A |
| 4 | Quality teaching would be reflected if the course field experience and/or clinical component improved students’ understanding of the course material | 180 | 4.26 | .758 | A |
| 5 | Quality teaching would be reflected if the course provided students with the opportunity to draw from scholarly research | 182 | 4.09 | .816 | A |
| 6 | Quality teaching would be reflected if adequate support (e.g. educational technology and library resources) were available and accessible to enhance students’ learning | 181 | 4.33 | .810 | SA |
| 7 | Quality teaching would be reflected if the course provided opportunity for students to critically reflect on practice or on important issues in the subject matter | 180 | 4.26 | .778 | A |
|   | Overall mean and standard deviation | | 4.25 | 0.81 | A |
Source: Primary data, 2020
Seven question items were set under this objective, as presented in the table above. Before analyzing the data, a reliability test using the Cronbach alpha was performed to verify the internal relatedness of these items; the alpha was found to be 0.765, above the 0.7 threshold (Amin, 2005), indicating good internal consistency among the questions. The analysis found that respondents were in agreement with all the views presented to them, with an overall mean (µ) of 4.25 and standard deviation (σ) of 0.81. This reliability confirms that the instrument used is valid for assessing the relevance of students’ evaluation of course learning outcomes in ensuring quality in the teaching and learning process.
Table 4: Responses on the assessment of the relevance of students’ evaluation of course learning outcomes

| No | Question theme | SD F (%) | D F (%) | NS F (%) | A F (%) | SA F (%) | Mean | Std. Dev | Interp. Scale |
|----|----------------|----------|---------|----------|---------|----------|------|----------|---------------|
| 1 | Course provides deeper understanding of concepts | 1 (0.5) | 12 (6.6) | 5 (2.7) | 85 (46.4) | 79 (43.2) | 4.26 | .844 | A |
| 2 | Course projects provide opportunity to demonstrate understanding | 1 (0.5) | 13 (7.1) | 6 (3.3) | 89 (48.6) | 73 (39.9) | 4.21 | .854 | A |
| 3 | Course learning outcomes are met | 3 (1.6) | 6 (3.3) | 5 (2.7) | 88 (48.1) | 79 (43.2) | 4.29 | .815 | SA |
| 4 | Course field experience improved understanding | 1 (0.5) | 5 (2.7) | 13 (7.1) | 88 (48.1) | 73 (39.9) | 4.26 | .758 | A |
| 5 | Course provided opportunity to draw from scholarly research | 1 (0.5) | 9 (4.9) | 21 (11.5) | 95 (51.9) | 57 (31.1) | 4.08 | .818 | A |
| 6 | Adequate support available & accessible | 2 (1.1) | 7 (3.8) | 6 (3.3) | 79 (43.2) | 89 (48.6) | 4.34 | .810 | SA |
| 7 | Course provided opportunity for critical reflection | 1 (0.5) | 7 (3.8) | 10 (5.5) | 92 (50.3) | 73 (39.9) | 4.25 | .772 | A |

(SD = strongly disagree, D = disagree, NS = not sure, A = agree, SA = strongly agree; F = frequency)
Source: Primary data, 2020
The first item asked whether quality teaching is reflected in the course providing students with a deeper understanding of the concepts and subject matter. Of the 183 respondents, 1 (0.5%) strongly disagreed, 12 (6.6%) disagreed, 5 (2.7%) were not sure, 85 (46.4%) agreed, and 79 (43.2%) strongly agreed. The item obtained a mean score of 4.26 and a standard deviation of 0.844, meaning that the majority of respondents agreed that the item is relevant in evaluating course learning outcomes as an aspect of quality in the teaching and learning process.

The second item asked whether quality teaching is reflected in the course projects, assignments, tests, and/or exams providing students with the opportunity to demonstrate an understanding of the course material. Of the 183 respondents, 1 (0.5%) strongly disagreed, 13 (7.1%) disagreed, 6 (3.3%) were not sure, 89 (48.6%) agreed, and 73 (39.9%) strongly agreed, giving a mean score of 4.21 and a standard deviation of 0.854: the majority agreed that the item is relevant.

The third item asked whether quality teaching is reflected in the course learning outcomes being met. Of the 183 respondents, 3 (1.6%) strongly disagreed, 6 (3.3%) disagreed, 5 (2.7%) were not sure, 88 (48.1%) agreed, and 79 (43.2%) strongly agreed, giving a mean score of 4.29 and a standard deviation of 0.815: the majority strongly agreed that the item is relevant.

The fourth item asked whether quality teaching is reflected in the course field experience and/or clinical component improving students’ understanding of the course material. Of the 183 respondents, 1 (0.5%) strongly disagreed, 5 (2.7%) disagreed, 13 (7.1%) were not sure, 88 (48.1%) agreed, and 73 (39.9%) strongly agreed, giving a mean score of 4.26 and a standard deviation of 0.758: the majority agreed that the item is relevant.

The fifth item asked whether quality teaching is reflected in the course providing students with the opportunity to draw from scholarly research. Of the 183 respondents, 1 (0.5%) strongly disagreed, 9 (4.9%) disagreed, 21 (11.5%) were not sure, 95 (51.9%) agreed, and 57 (31.1%) strongly agreed, giving a mean score of 4.08 and a standard deviation of 0.818: the majority agreed that the item is relevant.

The sixth item asked whether quality teaching is reflected in adequate support (e.g., educational technology and library resources) being available and accessible to enhance students’ learning. Of the 183 respondents, 2 (1.1%) strongly disagreed, 7 (3.8%) disagreed, 6 (3.3%) were not sure, 79 (43.2%) agreed, and 89 (48.6%) strongly agreed, giving a mean score of 4.34 and a standard deviation of 0.810: the majority strongly agreed that the item is relevant.

The seventh item asked whether quality teaching is reflected in the course providing opportunity for students to critically reflect on practice or on important issues in the subject matter. Of the 183 respondents, 1 (0.5%) strongly disagreed, 7 (3.8%) disagreed, 10 (5.5%) were not sure, 92 (50.3%) agreed, and 73 (39.9%) strongly agreed, giving a mean score of 4.25 and a standard deviation of 0.772: the majority agreed that the item is relevant.
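The combined agreement figures cited in the discussion that follows (e.g., 89.6% for the first item) are simply the sums of the “agree” and “strongly agree” percentages in Table 4; a short sketch reproducing them:

```python
# 'Agree' and 'strongly agree' percentages per item, taken from Table 4
table4_pct = {
    "Deeper understanding of concepts":            (46.4, 43.2),
    "Opportunity to demonstrate understanding":    (48.6, 39.9),
    "Course learning outcomes are met":            (48.1, 43.2),
    "Field experience improved understanding":     (48.1, 39.9),
    "Opportunity to draw from scholarly research": (51.9, 31.1),
    "Adequate support available and accessible":   (43.2, 48.6),
    "Opportunity for critical reflection":         (50.3, 39.9),
}

for item, (agree, strongly_agree) in table4_pct.items():
    print(f"{item}: {agree + strongly_agree:.1f}% combined agreement")
# Prints 89.6, 88.5, 91.3, 88.0, 83.0, 91.8 and 90.2, matching the discussion
```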
DISCUSSION, CONCLUSION AND RECOMMENDATIONS
Students’ evaluation of course learning outcomes as an aspect of quality teaching.
The study’s findings highlight the critical role various attributes of course learning outcomes play in ensuring quality teaching and learning processes. Each item analyzed received substantial agreement from respondents, demonstrating their perceived relevance and importance in educational settings.
Deeper Understanding of Concepts: According to the data, 89.6% of respondents agree that quality teaching is reflected when courses provide students with a deeper understanding of concepts and subject matter. This suggests that students highly value courses that enhance their conceptual understanding, indicating that deeper learning is a crucial component of perceived quality education. This aligns with the need for robust theoretical foundations within curricula (Denson, Loveday, & Dalton, 2010).
Opportunity to Demonstrate Understanding: Similarly, 88.5% of respondents believe that projects, assignments, and exams allowing students to demonstrate their understanding of course materials are essential for quality teaching. This underscores the necessity for assessments that are integrative rather than merely evaluative, reflecting real-world applications to gauge students’ comprehensive understanding (Yossi, Baruch, & Gali, 2020).
Meeting Course Learning Outcomes: A significant majority of respondents (91.3%) agree that meeting course learning outcomes is a vital indicator of quality teaching. This high level of agreement highlights the importance of clearly defined and achievable learning outcomes in educational quality assurance. Such outcomes should be central to curriculum development efforts to ensure they align with students’ educational needs and professional aspirations (Zhao & Gallant, 2012).
Improved Understanding through Field Experience: Practical components such as field experiences or clinical components are viewed positively, with 88.0% agreement among respondents. This finding underscores the value of experiential learning in bridging theoretical knowledge and practical application, thereby significantly enhancing students’ readiness for professional practice (Pedro & Isabel, 2020).
Drawing from Scholarly Research: Although this attribute had slightly lower agreement at 83.0%, it still indicates that students see value in integrating scholarly research into their coursework. This suggests that engaging with current research enhances the educational experience by providing contemporary and relevant knowledge, fostering a culture of inquiry and up-to-date knowledge acquisition (Frank & Meyer, 2020).
Adequate Support (Educational Technology and Library Resources): The highest agreement was observed in the importance of accessible and adequate educational resources, with 91.8% of respondents indicating their necessity for quality learning. This highlights the critical role of institutional support in providing necessary learning tools and resources, emphasizing the need for investment in educational technologies and library resources (Denson, Loveday, & Dalton, 2010).
Critical Reflection Opportunities: Finally, 90.2% of respondents agree that courses offering opportunities for critical reflection on practice or significant issues in the subject matter contribute to quality teaching. This suggests that reflective practices are highly valued for deeper engagement and understanding of course content, enhancing students’ analytical and critical thinking skills (Yossi, Baruch, & Gali, 2020).
Implications of the Findings
Curriculum Development: The emphasis on deeper understanding and demonstration of knowledge suggests that curricula should integrate robust theoretical foundations with practical applications. This holistic approach to curriculum design can enhance both conceptual understanding and practical skills.
Assessment Methods: Evaluations should be diverse and reflective of real-world applications. Incorporating various forms of assessments, such as projects, assignments, and practical tests, can significantly enhance the learning experience by allowing students to demonstrate their comprehensive understanding (Yossi, Baruch, & Gali, 2020).
Experiential Learning: Integrating field experiences or clinical components is crucial, as these practical engagements significantly improve students’ grasp of course material and readiness for professional practice. This finding underscores the need for experiential learning opportunities within educational programs (Pedro & Isabel, 2020).
Research Integration: Courses should include elements that encourage students to engage with and draw from scholarly research. This approach can foster a culture of inquiry and ensure students acquire contemporary and relevant knowledge (Frank & Meyer, 2020).
Institutional Support: Ensuring the availability of adequate educational technologies and library resources is essential. Institutions should invest in these areas to support student learning effectively, reflecting the highest agreement observed in the study (Denson, Loveday, & Dalton, 2010).
Critical Reflection: Opportunities for students to critically reflect on their learning and practice should be embedded within the curriculum. This can enhance students’ analytical and critical thinking skills, contributing to a deeper engagement with course content (Yossi, Baruch, & Gali, 2020).
Students’ Evaluation of Course Learning Outcomes
The student evaluation of teaching (SET) survey is frequently used to measure student satisfaction, collecting information about the course and the teacher’s effectiveness (Denson, Loveday, & Dalton, 2010). Many studies have explored the reliability and validity of SETs, confirming their role as a tool for quality assurance in higher education institutions (Zhao & Gallant, 2012). SETs contain several criteria that assess different aspects of teaching and learning, and their widespread use reflects a global trend in the rationalization of teaching within universities (Pedro & Isabel, 2020; Frank & Meyer, 2020).
The implications of these findings suggest that educational institutions aiming to enhance the quality of their teaching and learning processes should consider incorporating these attributes into their curriculum development, assessment methods, and institutional support strategies. By doing so, they can align more closely with students’ expectations and educational standards, ensuring a higher quality educational experience.
CONCLUSIONS
The following conclusions build on the previous discussion and provide insights related to students’ evaluation of teaching quality in higher education. These insights, derived from each research question, form a basis for enhancing the practice of using student evaluations of teaching (SETs) as key tools for quality assurance in teaching and learning within higher education institutions. The insights are organized according to themes corresponding to the research questions.
Students’ Evaluation of Course Learning Outcomes as an Aspect of Quality Teaching
Insight 1: Focus on Learning Outcomes. To ensure teaching and learning are result-oriented and of high quality, they should be grounded in and focused on appropriate learning outcomes. Student evaluations of such teaching should emphasize these outcomes, which prioritize high-level results. This alignment suggests that students and academics hold similar conceptions of good teaching. The evident consistency between students’ and academics’ views provides an opportunity to revitalize student evaluations of teaching, ensuring they reflect shared expectations and standards.
Insight 2: Importance of Quantifiable Performance Indicators. Quantifiable performance indicators are crucial when developing question items for a higher education institution’s SET tool. These indicators provide explicit descriptions of evidence against which the quality of teaching and learning outcomes can be measured. Referring to the Mountains of the Moon University Quality Assurance Policy (reviewed 2018), learning outcomes encompass the three learning domains: affective, cognitive, and psychomotor. These domains are reflected in attributes such as problem-solving skills, critical thinking, leadership, and effective communication skills. The success of any student evaluation of teaching and learning outcomes supports this proposition.
Recommendations
- Regular Review of SET Tools: Mountains of the Moon University (MMU) should review its SET tool every two years to incorporate emerging attributes of good teaching from the perspectives of both students and academics. Additionally, the wording or phrasing of question items in the SET tool should align with the conceptions and themes of good teaching as revealed by students’ responses in this study.
- Benchmarking by Regional Regulatory Bodies: Regional regulatory bodies for higher education standards, such as the National Council for Higher Education (NCHE) in Uganda, the East African Higher Education Space (EAHES), and the Inter-University Council for East Africa (IUCEA), should use the findings of this study as a basis for developing robust evaluation guidelines. Benchmarking against these findings can help build comprehensive and candid evaluation frameworks.
Future Research Directions
Future research should aim to explain the small variances in student ratings on SETs and their tendency to rate criteria they find more important higher than others. This research could employ qualitative approaches, such as focus groups and interviews, to gain deeper insights. Scholars should also focus on general evaluation dimensions of teaching quality in higher education and best practices in quality assurance. This can lead to the development of more effective and meaningful evaluation tools that truly reflect the quality of teaching and learning.
By incorporating these insights and recommendations, educational institutions can enhance the effectiveness of student evaluations of teaching, ensuring they serve as reliable and constructive tools for quality assurance in higher education.
REFERENCES
- Adesoji, F. A. (2018). Bloom Taxonomy of Educational Objectives and the Modification of Cognitive Levels. Advances in Social Sciences Research Journal, 5(5).
- Adom, D., & Hussein, E. K. (2018). Theoretical and conceptual framework: Mandatory ingredients of a quality research. International Journal of Scientific Research, 7(1).
- Al Kuwaiti, A., & Arun, V. S. (2015). Appraisal of students experience survey (SES) as a measure to manage the quality of higher education in the Kingdom of Saudi Arabia: An institutional study using six sigma model. Educational Studies, 41(4), 430-443. DOI: 10.1080/03055698.2015.1043977.
- Amanda, F. (2017). Contextualising excellence in higher education teaching: Understanding the policy landscape. In Teaching Excellence in Higher Education (pp. 5-38).
- Amin, M. E. (2005). Social Science Research; Conception, Methodology and Analysis; Kampala, Uganda: Makerere University.
- Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice Hall.
- Bandura, A. (1981). Self-referent thought: A developmental analysis of self-efficacy. In J. Flavell and L. Ross (Eds). Social Cognitive Development: Frontiers andPossible Futures (pp. 200-239). Cambridge, England: Cambridge University Press.
- Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall.
- Bandura, A. (1989). Human agency in social cognitive theory. American Psychologist, 44, 1175-1184.
- Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
- Banta, T.W. and Palomba, C.A. (2015), Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education, Jossey-Bass, San Francisco, CA.
- Baumert, J., Kunter, M., Blum, W., Brunner, M., Voss, T., Jordan, A., & Tsai, Y.-M. (2010). Teachers’ mathematical knowledge, cognitive activation in the classroom, and student progress. American Educational Research Journal, 47, 133-180.
- Behari-Leak, K. (2017). New academics, new higher education contexts: A critical perspective on professional development. Teaching in Higher Education. doi:10.1080/13562517.2016.273215.
- Benton, S.L., & Suzanne Y. (2018). “Best Practices in the Evaluation of Teaching.” IDEA Paper 69.
- BIS. (2016). White Paper: Higher education: Teaching excellence, social mobility and student choice. Retrieved from https://www.gov.uk/government/consultations/higher-education-teaching-excellence-social-mobility-and-student-choice.
- Blackmore, P., Blackwell, R., & Edmondson, M. (2016). Tackling wicked issues: Prestige and employment outcomes in the teaching excellence framework. HEPI Occasional Paper 14.
- Blair, E., & Noel, K. V. (2014). Improving higher education practice through student evaluation systems: Is the student voice being heard? Assessment & Evaluation in Higher Education, 39(7), 879-894. doi:10.1080/02602938.2013.875984.
- Bobby, C. L. (2014). The ABCs of building quality cultures for education in a global world. Paper presented at the International Conference on Quality Assurance, Bangkok, Thailand.
- Boring, A., Ottoboni, K. and Stark, P.B. (2016), “Student evaluations of teaching (mostly) do not measure teaching effectiveness”, Science Open Research, available at: https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AETBZC.v1.
- Brooks C. (2021) The quality conundrum in initial teacher education, Teachers and Teaching, 27(1-4): 131-146, DOI: 10.1080/13540602.2021.1933414.
- Chan, C. K. Y., L. Y. Y. Luk, & M. Zeng. (2014). “Teachers’ Perceptions of Student Evaluations of Teaching.” Educational Research and Evaluation 20 (4): 275–289. doi:10.1080/13803611.2014.932698.
- Chen, L. (2016), “Do student characteristics affect course evaluation completion?”, paper presented at the 2016 Annual Conference of the Association for Institutional Research, New Orleans, LA.
- Ching, G. (2018). A literature review on the student evaluation of teaching: An examination of the search, experience, and credence qualities of SET. Higher Education Evaluation and Development. https://doi.org/10.1108/HEED-01-2018-0005
- Costes, N., Hopbach, A., Kekäläinen, H., Ijperen, R. V., & Walsh, P. (2010). Quality assurance and transparency tools. Helsinki: European Association for Quality Assurance in Higher Education.
- Creswell, J. W. (2012). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (4th ed.). Upper Saddle River, NJ: Merrill.
- Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches. London: SAGE Publications.
- Darwin, S. (2016). What contemporary work are student ratings actually doing in higher education? Studies in Educational Evaluation, 54, 13-21.
- Deem, R. (2015). A critical commentary on Ray Land and George Gordon’s ‘Teaching excellence initiatives: Modalities and operational factors’. York: Higher Education Academy.
- Denson, N., Loveday, T., & Dalton, H. (2010). Student evaluation of courses: What predicts satisfaction? Higher Education Research & Development, 29(4), 339-356. https://doi.org/10.1080/07294360903394466
- Elassy, N. (2013). A model of student involvement in the quality assurance system at institutional level. Quality Assurance in Education, 21(2), 162-198.
- Evans, K. H., Thompson, A. C., O’Brien, C., Bryant, M., Basaviah, P., Prober, C., & Popart, R. A. (2016). An innovative blended preclinical curriculum in clinical epidemiology and biostatistics: Impact on student satisfaction and performance. Academic Medicine, 91(5), 696-700.
- Fairchild, E., & Crage, S. (2014). Beyond the debates: Measuring and specifying student consumerism. Sociological Spectrum, 34(5), 403-420. https://doi.org/10.1080/02732173.2014.937651
- Fisk, R. P., Grove, S. J., & John, J. (2014). Services marketing: An interactive approach (4th ed.). Mason, OH: Cengage Learning.
- Flodén, J. (2017). The impact of student feedback on teaching in higher education. Assessment & Evaluation in Higher Education, 42(7), 1054-1068. https://doi.org/10.1080/02602938.2016.1224997
- Frank, D. J., & Meyer, J. W. (2020). The university and the global knowledge society. Princeton: Princeton University Press.
- Frick, T. W., Chadha, R., Watson, C., & Zlatkovska, E. (2010). New measures for course evaluation in higher education and their relationships with student learning. School of Education, Indiana University Bloomington, Denver, CO. Retrieved from http://www.indiana.edu/~tedfrick/TALQ.pdf
- Geoff, T., & Tashmin, K. (2017). Student evaluation of teaching: Bringing principles into practice. Journal of Higher Education in Africa/Revue de l’enseignement supérieur en Afrique, 15(1), 89-104.
- Geven, K., & Maricut, A. (2015). A merry-go-round of evaluations: Moving from administrative burden to reflection on education and research in Romania. In A. Curaj, L. Matei, R. Pricopie, J. Salmi, & P. Scott (Eds.), The European higher education area: Between critical reflections and future policies: Part II (pp. 665-684). Springer.
- Giaber, J. M. (2018). An integrated approach to teaching translation practice: Teacher’s approach and students’ evaluation. The Interpreter and Translator Trainer, 12(3), 257-281. https://doi.org/10.1080/1750399X.2018.1502006
- Gibbs, G. R. (2007). Analyzing qualitative data. In U. Flick (Ed.), The Sage qualitative research kit. Thousand Oaks, CA: Sage.
- Gill, T. G. (2014). The complexity and the case method. Management Decision, 52(9), 1564-1590.
- Golding, C., & Adam, L. (2016). Evaluate to improve: Useful approaches to student evaluation. Assessment & Evaluation in Higher Education, 41(1), 1-14. https://doi.org/10.1080/02602938.2014.976810
- Greatbatch, D., & Holland, J. (2016). Teaching quality in higher education. London: Department for Business, Innovation and Skills.
- Hansson, F. (2010). Dialogue in or with the peer review? Evaluating research organizations in order to promote organizational learning. Science and Public Policy, 37(4), 239-251.
- Harfold, T. (2014). Big data: A big mistake? Significance, 11, 14-19.
- Harman, T., Bertrand, B., Greer, A., Pettus, A., Jennings, J., Wall-Bassett, E., & Babatunde, O. T. (2015). Case-based learning facilitates critical thinking in undergraduate nutrition education: Students describe the big picture. Journal of the Academy of Nutrition and Dietetics, 115(3), 378-388.
- Hilvano, N. T., Mathis, K. M., & Schauer, D. P. (2014). Collaborative learning utilizing case-based problems. Journal of College Biology Teaching: A Publication of the Association of College and University Biology Educators, 40(2), 22-30.
- Huber, E., & Harvey, M. (2016). An analysis of internally funded learning and teaching project evaluation in higher education. International Journal of Educational Management, 30(5).
- Iqbal, I. (2013). Academics’ resistance to summative peer review of teaching: Questionable rewards and the importance of student evaluations. Teaching in Higher Education, 18(5), 557-569.
- Israel, M., & Hay, I. (2006). Research ethics for social scientists: Between ethical conduct and regulatory compliance. Thousand Oaks, CA: Sage.
- Jarboe, N. (2016). Women count: Leaders in higher education 2016. Women Count. Retrieved from https://women-count.org/
- Jideani, V. A., & Jideani, I. A. (2012). Alignment of assessment objectives with instructional objectives using revised Bloom’s Taxonomy: The case for food science and technology education. Journal of Food Science Education, 11(3), 34-42.
- Kehm, B. M., & Stensaker, B. (2009). University rankings, diversity, and the new landscape of higher education. Rotterdam: Sense Publishers.
- Kelty, R., & Bunten, A. (2017). Risk-taking in higher education: The importance of negotiating intellectual challenge in the college classroom. Lanham, MD: Rowman & Littlefield Publishers.
- Kettunen, J. (2010). Cross-evaluation of degree programmes in higher education. Quality Assurance in Education, 18(1), 34-46.
- Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30(3), 607-610.
- Kuzmanovic, M., Savic, G., Gusavac, B. A., Makajic-Nikolic, D., & Panic, B. (2013). A conjoint-based approach to student evaluations of teaching performance. Expert Systems with Applications, 40(10), 4083-4089. https://doi.org/10.1016/j.eswa.2013.01.039
- Li, K. C., Ye, C. J., & Wong, B. T. M. (2018). Learning analytics in higher education institutions in Asia. In International Conference on Technology in Education (pp. 161-170). Hong Kong.
- Low Hui, M., Abdullah, A., & Mohamed, A. (2013). Publish or perish: Evaluating and promoting scholarly output. Contemporary Issues in Education Research, 6(1), 143-146.
- MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291-303.
- Mok, K. H., & Cheung, A. B. L. (2011). Global aspirations and strategizing for world-class status: New form of politics in higher education governance in Hong Kong. Journal of Higher Education Policy and Management, 33(3), 231-251.
- Moreno, R., & Park, B. (2010). Cognitive load theory: Historical development and relation to other theories. In L. Plass, R. Moreno, & R. Brünken (Eds.), Cognitive load theory (pp. 9-28). New York, NY: Cambridge University Press.
- Mountains of the Moon University Vice Chancellor’s Task Force Report to a Joint Meeting of Council, Board of Directors, Mountains of the Moon University Government Task Force, and Top Management team. (2019, April). Kalya Courts, Fort Portal, Uganda.
- Mountains of the Moon University. (2017). Quality assurance reports on teaching and learning: A students’ perspective (2016 and 2017).
- Mountains of the Moon University. (2017). Reviewed operational plan for 2017/2018.
- Mountains of the Moon University. (2018). Charter document.
- Mountains of the Moon University. (2018). Reviewed quality assurance policy.
- Mountains of the Moon University. (2020). Students’ enrolment report 2019/2020 as at March 13, 2020. Office of the Academic Registrar.
- Mugenda, O. M., & Mugenda, A. G. (2007). Research methods: Quantitative and qualitative approaches. Nairobi, Kenya: African Centre for Technology Studies-ACTS.
- Noblitt, L., Vance, D. E., & Smith, M. L. D. (2010). A comparison of case study and traditional teaching methods for improvement of oral communication and critical-thinking skills. Journal of College Science Teaching, 39(5), 26-32.
- Noda, A., Hou, A., Shibui, S., & Chou, H. (2018). Restructuring quality assurance frameworks: A comparative study between NIAD-QE in Japan and HEEACT in Taiwan. Higher Education Evaluation and Development, 12(1), 2-18.
- Normand, R. (2016). The politics of standards and quality. In R. Normand (Ed.), The changing epistemic governance of European education (Educational Governance Research 3) (pp. 63-94). New York, NY: Springer.
- OECD. (2015). Education at a glance, interim report: Update of employment and educational attainment indicators. Retrieved from https://www.oecd.org/edu/EAG-Interim-report.pdf
- Oniye, O. A. (2017). Basic steps in conducting educational research. In A. Y. Abdulkareem (Ed.), Introduction in research method in education. Ibadan: AgboAreo Publisher.
- Pajares, F., & Schunk, D. (2002). Self-beliefs in psychology and education: An historical perspective. In J. Aronson (Ed.), Improving academic achievement (pp. 3-21). New York, NY: Academic Press.
- Leedy, P. D., & Ormrod, J. E. (2013). Practical research: Planning and design (10th ed.). Boston, MA: Pearson Education.
- Pineda, P., & Steinhardt, I. (2020). The debate on student evaluations of teaching: Global convergence confronts higher education traditions. Teaching in Higher Education. https://doi.org/10.1080/13562517.2020.1863351
- Persky, A. M., Henry, T., & Campbell, A. (2015). An exploratory analysis of personality, attitudes, and study skills on the learning curve within a team-based learning environment. American Journal of Pharmaceutical Education, 79(2), 1-11.
- Pey-Tee, O., Benson, S., & Kam, C. C. S. (2017). Psychometric quality of a student evaluation of teaching survey in higher education. Assessment & Evaluation in Higher Education, 42(5), 788-800. https://doi.org/10.1080/02602938.2016.1193119
- Choudhury, P. K., et al. (2019). Student assessment of quality of engineering education in India: Evidence from a field survey. Quality Assurance in Education, 27(1), 103-126. https://doi.org/10.1108/QAE-02-2015-0004
- Punch, K. F. (2005). Introduction to social research: Quantitative and qualitative approaches (2nd ed.). Thousand Oaks, CA: Sage.
- Ramsubramanian, P. (2012). Six Sigma in educational institutions. International Journal of Engineering Practical Research, 1(1), 1-5.
- Revilla, M. A., Saris, W. E., & Krosnick, J. A. (2014). Choosing the number of categories in agree-disagree scales. Sociological Methods & Research, 43, 73-97.
- Rosen, A. S. (2018). Correlations, trends and potential biases among publicly accessible web-based student evaluations of teaching: A large-scale study of RateMyProfessors.com data. Assessment & Evaluation in Higher Education, 43(1), 31-44. https://doi.org/10.1080/02602938.2016.1276155
- Royal, K. D., & Flammer, K. (2015). Measuring academic misconduct: Evaluating the construct validity of the exams and assignments scale. American Journal of Applied Psychology, 4(3–1), 58-64.
- Royal, K. D., Schoenfeld-Tacher, R., & Flammer, K. (2016). Comparing veterinary student and faculty perceptions of academic misconduct. International Research in Higher Education, 1(1), 81-90.
- Ryan, R., & Lynch, M. (2003). Philosophies of motivation and classroom management. In R. Curren (Ed.), Blackwell companion to philosophy: A companion to the philosophy of education (pp. 260-271). New York, NY: Blackwell.
- Schofer, E., & Meyer, J. W. (2005). The worldwide expansion of higher education in the twentieth century. American Sociological Review, 70(6), 898-920. https://doi.org/10.1177/000312240507000602
- Smidt, H. (2015). European quality assurance: A European higher education area success story (overview paper). In A. Curaj, L. Matei, R. Pricopie, J. Salmi, & P. Scott (Eds.), The European higher education area: Between critical reflections and future policies, Part II (pp. 625-637). Springer.
- Standish, T., Joines, J. A., Young, K. R., & Gallagher, V. (2018). Improving SET response rates: Synchronous online administration as a tool to improve evaluation quality. Research in Higher Education, 59(6), 812-823. https://doi.org/10.1007/s11162-017-9488-5
- Stukalina, Y. (2014). Identifying predictors of student satisfaction and student motivation in the framework of assuring quality in the delivery of higher education services. Business, Management & Education, 12(1), 127-137.
- Thanassoulis, E., Dey, P. K., Petridis, K., Goniadis, I., & Georgiou, A. C. (2017). Evaluating higher education teaching performance using combined analytic hierarchy process and data envelopment analysis. Journal of the Operational Research Society, 68(4), 431-445. https://doi.org/10.1057/s41274-016-0165-4
- Uttl, B., & Smibert, D. (2017). Student evaluations of teaching: Teaching quantitative courses can be hazardous to one’s career. PeerJ, 5. https://doi.org/10.7717/peerj.3299
- Wandiembe, P. (2010). Sample survey theory introduction (2nd ed.). Kampala: Makerere University.
- Yin, R. K. (2012). Applications of case study research (3rd ed.). Thousand Oaks, CA: Sage.
- Yossi, H., Baruch, K., & Gali, N. (2020). The relative importance of teaching evaluation criteria from the points of view of students and faculty. Assessment & Evaluation in Higher Education, 45(3), 447-459. https://doi.org/10.1080/02602938.2019.1665623
- Zepke, N. (2017). Student engagement in neo-liberal times: What is missing? Higher Education Research and Development. https://doi.org/10.1080/07294360.2017.1370440
- Zhao, J., & Gallant, D. J. (2012). Student evaluation of instruction in higher education: Exploring issues of validity and reliability. Assessment & Evaluation in Higher Education, 37(2), 227-235. https://doi.org/10.1080/02602938.2010.523819
- Zhu, C. (2013). How innovative are schools in teaching and learning? A case study in Beijing and Hong Kong. Asia-Pacific Education Researcher, 22(2), 137-145.
- Zodpey, S. P. (2004). Sample size and power analysis in medical research. Indian Journal of Dermatology, Venereology, and Leprology, 70(2), 123-128.