

Electronic Assessment Framework for Competency-Based Education amongst Intellectual Disability Students in Technical and Vocational Education and Training (TVET) Institutions in Nairobi County

Anne Barongo, Dr. Kennedy Ogada, Dr. Dennis Njagi

Department of Computing, Faculty of Information Communication Technology, Jomo Kenyatta University of Agriculture and Technology, Kenya

DOI: https://doi.org/10.51584/IJRIAS.2024.911011

Received: 14 October 2024; Accepted: 26 October 2024; Published: 29 November 2024

ABSTRACT

The shift to Kenya’s Competency-Based Curriculum (CBC) in Nairobi County’s Technical and Vocational Education and Training (TVET) institutions underscored critical issues related to the exclusion of, and challenges faced by, special needs students with intellectual disabilities (ID). Despite 13.5% of Kenyan children having these disabilities, only 6% are enrolled in schools, revealing a stark gap in educational accessibility. Existing assessment frameworks in TVET institutions lack inclusivity, perpetuating educational disparities and negatively impacting societal well-being. This study focused on proposing an inclusive Electronic Assessment Framework for Competency-Based Education (CBE) to address these deficiencies and ensure equitable access for students with disabilities. The study’s objective was to review the electronic assessment framework for competency-based education. The study was anchored in the TPACK theoretical framework, which guided its process. The research employed a descriptive design, collecting quantitative and qualitative data from 177 respondents, including students with disabilities, caregivers, parents, tutors, institutional heads, and examining body representatives. The study identified several weaknesses in electronic assessment frameworks, with the most significant being a lack of comprehensive accessibility (35.3%). Other notable issues included non-adaptive designs (23.5%), insufficient training for educators (17.6%), and unspecified limitations (23.5%). To address these challenges, respondents suggested adopting universal design principles (30%) to improve accessibility, expanding educator training (24%) to enhance effective use, developing adaptive tools (17%) for diverse learner needs, and implementing stronger data privacy measures (29%) to protect student information. Regression analysis revealed a significant positive relationship between familiarity with these frameworks and their utilization (β = 0.32, p < 0.01), with familiarity explaining 51% of the variance in usage. However, non-adaptive features limited the frameworks’ effectiveness, highlighting the need for further modifications. These findings underscore the importance of continuous improvements to electronic assessment frameworks, including more flexible designs, enhanced accessibility, and comprehensive educator training. Refining these tools will help better support special needs students in TVET institutions, fostering equitable learning environments and promoting improved educational outcomes for all learners.

Keywords: evaluation, competency-based education and training, assessment, certification, TVET, CBC, ID

INTRODUCTION

Intellectual disabilities (ID), associated with conditions such as autism spectrum disorder, Down syndrome, and Williams syndrome, affect 20 to 30 per 1,000 individuals, with prevalence continuing to rise. Individuals with ID face challenges in areas like communication, learning, self-care, and social interactions, which vary across life stages (Zablotsky et al., 2019). Despite their potential to lead fulfilling lives with appropriate support, people with ID experience higher rates of poor health, co-occurring conditions such as mental illness, and preventable deaths. These challenges are exacerbated by limited access to suitable educational services, a lack of ID-specific training for educators, and widespread misunderstanding of intellectual disabilities (Friedman, 2023). Currently, Kenya lacks a comprehensive framework to guide educational assessments for individuals with intellectual disabilities. There is a need to prioritize the outcomes that matter most to people with ID and their carers, improve educational support and accountability, and create electronic assessment models tailored specifically for the ID population (Newell et al., 2023).

In the ever-evolving landscape of education, competency-based education (CBE) emerged as a transformative paradigm that prioritized skill mastery and personalized learning (Sin, 2021). This shift was particularly significant for educators adapting to the demands of the 21st century, recognizing the need to equip students with practical skills beyond traditional academic knowledge (Akala, 2021; Ekabu, 2023). The integration of electronic assessment frameworks played a crucial role in this context, facilitating the measurement of student progress, providing timely feedback, and supporting personalized learning paths tailored to individual competencies (Ludwikowska, 2022). These digital frameworks ensured that competency-based education remained adaptable and effective, meeting diverse educational needs globally (Sin, 2021; Akala, 2021; Ekabu, 2023).

Globally, various countries embraced CBE, utilizing electronic assessment frameworks to enhance their educational models. In the United States, platforms such as Canvas and Blackboard played integral roles in evaluating competencies across different fields (Tatnall, 2023). The UK employed systems like Moodle and Turnitin to support comprehensive skill development (Toale, Morris, & Kavanagh, 2021), while China’s Tencent Classroom revolutionized the assessment landscape by addressing geographical challenges through remote learning (Su, 2021). In Sub-Saharan Africa, the use of electronic assessment frameworks gained traction. For example, Nigeria’s National Open University integrated these frameworks to evaluate competencies across various disciplines (Okagbue et al., 2023), and South Africa utilized platforms like Sakai and Moodle to foster collaboration and critical thinking (Naidoo et al., 2022; Bender, 2012). Kenya also underwent significant educational reforms with the introduction of the Competency-Based Curriculum (CBC), marking a shift towards skill-oriented learning. The Kenya National Examinations Council’s (KNEC) Digital Assessment App played a crucial role in assessing student competencies (Akala, 2021; Ekabu, 2023).

However, within the context of the CBC, challenges persisted, particularly for students with intellectual disabilities. Existing electronic assessment frameworks often lacked accessibility features, creating barriers for students with special needs, and their non-adaptive design might have hindered accurate competency assessments (Newell et al., 2023; Tatnall, 2023). Issues such as limited accommodation features, resource disparities, insufficient training for educators, and data privacy concerns further excluded students with intellectual disabilities from the benefits of the CBC (Su, 2021; Okagbue et al., 2023). Addressing these multifaceted challenges became imperative for fostering inclusivity and mitigating the detrimental impact of disability on societal well-being. A comprehensive exploration of these issues was necessary to guide the development of an Electronic Assessment Framework explicitly tailored for Competency-Based Education among students with intellectual disabilities.

The education of children with intellectual disabilities in Kenya has been a focus since the nation’s independence in 1963. Immediately following independence, the Ominde Commission (Kenya, 1964) recommended a focus on special needs education, and the government appointed the Ngala Commission the same year to advise on matters related to special needs education. As a result, enrolment in special needs education institutions increased ten-fold over the past six decades. In the financial year 2017/2018, the Ministry of Education disbursed capitation grants to 108,221 learners with disabilities, who were enrolled in 290 special primary institutions and 2,057 special units/integrated programs (MoE, 2018). Despite these efforts, emerging evidence indicated that students with disabilities continued to lag behind their peers without disabilities, with disability exacerbating the learning crisis (World Bank, 2019). Factors contributing to this disparity included a lack of curriculum adaptation and exclusion of disability measurement in assessments. This study aimed to unmask the multidirectional learning exclusions at the bottom of the pyramid, linked to disability categories, gender, and age. It further examined the effectiveness of examination accommodations instituted by the Kenya National Examinations Council, such as time extensions. The conclusions and policy recommendations of this analysis were summarized into three key messages.

The current assessment frameworks in Kenya faced several critical issues that hindered equity in education, particularly for students with intellectual disabilities. These issues included inadequate adaptation for special needs, where the new curriculum still lacked sufficient adaptations in assessment methods to cater to diverse needs, leading to assessments that did not accurately reflect the capabilities of students with intellectual disabilities (Sin, 2021; Ekabu, 2023). Additionally, there was limited professional development for educators, who often lacked the training and resources needed to implement inclusive assessment practices effectively (Akala, 2021; Newell et al., 2023). Insufficient focus on individualized learning plans further exacerbated the problem, as current frameworks did not adequately incorporate personalized plans that considered the unique strengths and challenges of students with intellectual disabilities (Tatnall, 2023; Su, 2021). Overemphasis on standardized testing also disadvantaged these students, as standardized assessments did not always accommodate their learning styles or provide a comprehensive view of their abilities (Okagbue et al., 2023; Naidoo et al., 2022). Furthermore, the lack of stakeholder involvement, including input from parents, caregivers, and other stakeholders, was a critical issue, as their insights were essential for developing equitable and supportive assessments (Newell et al., 2023; Ekabu, 2023). Lastly, insufficient resources and support, such as assistive technologies and specialized materials, created additional barriers to equitable education (Sin, 2021; Tatnall, 2023). Addressing these issues was crucial for ensuring that assessment frameworks promoted equity and inclusivity. Thus, by incorporating feedback from stakeholders such as the KFLEA Foundation, KILEA Intermediate, and KPre LEA Prevocational programs, the study aimed to develop an assessment model that was both adaptive and aligned with the needs of students with intellectual disabilities (Akala, 2021).
As Kenya transitioned from the traditional 8-4-4 system to the Competency-Based Curriculum (CBC), a critical issue emerged concerning the exclusion and unique challenges faced by students with special needs within this new educational framework. Despite the CBC’s aim to foster inclusivity and align graduates’ skills with market demands, there remain significant barriers for students with special needs. The Kenya National Examinations Council (KNEC) has played a pivotal role in shaping educational assessments through initiatives such as the Kenya Foundation for Learning and Educational Assessment (KFLEA) and the Kenya Institute of Special Education (KISE). These initiatives are designed to develop frameworks and resources to support special needs education (Inyega et al., 2021; Wagner et al., 2022; Tabot, Benedicta & Tuimur, 2022).

However, persistent challenges, including inadequate adaptation of assessments for special needs, limited professional development for educators, insufficient focus on individualized learning plans, and an overreliance on standardized testing, highlight a critical gap in ensuring equitable and effective educational opportunities for these students within the CBC framework.
Despite the CBC’s promise to align graduates’ skills with market demands, the stark reality revealed that only 6% of the estimated 2,489,252 children with disabilities in Kenya were enrolled in school (Akala, 2021; KICD, 2018). Moreover, inadequate physical infrastructure, insufficient teaching and learning materials, and the allocation of only 4% of education resources to special needs schools contributed to broader societal inequalities, leaving households with disabled members more vulnerable to poverty due to unequal access to education, employment, healthcare, and food (KICD, 2018). Amidst these challenges, a crucial gap existed in the current discourse, with limited attention being dedicated to the educational framework for students with intellectual disabilities within the CBC. This situation necessitated focused research and intervention to address these shortcomings. Specifically, this study addressed the following question: What is the primary electronic assessment framework currently employed within Technical and Vocational Education and Training (TVET) institutions?

Problem Statement

Despite strides in Kenya’s educational reforms, particularly with the adoption of the Competency-Based Curriculum (CBC), students with intellectual disabilities continue to face exclusion due to inadequacies in the current assessment frameworks. Intellectual disabilities, associated with conditions such as autism spectrum disorder, Down syndrome, and Williams syndrome, present unique challenges that require specialized support in communication, learning, and social interaction. While the CBC aims to promote skill-based learning and inclusivity, its existing assessment frameworks largely lack the adaptations necessary to accommodate students with intellectual disabilities, leading to disparities in educational access and achievement (Friedman, 2023; Gichuru et al., 2021).

Current assessments within the CBC are predominantly designed for mainstream learners and do not account for the specific needs of students with intellectual disabilities. This standardization fails to provide equitable measures of competence for these students, often resulting in inaccurate representations of their abilities and progress. Contributing factors include a lack of accessible resources, insufficient training for educators, and a deficiency in adaptive learning tools within the CBC framework (Newell et al., 2023; Tatnall, 2023). Thus, there is a critical need to analyze Kenya’s CBC assessment frameworks to identify the gaps related to the inclusion of students with intellectual disabilities. By addressing these gaps, this study aimed to inform the development of a comprehensive, inclusive assessment model that aligns with the CBC’s vision of inclusive, skills-oriented education and supports equitable learning outcomes for all students. Therefore, this study sought to review the electronic assessment framework for competency-based education.

The Technological Pedagogical Content Knowledge (TPACK) Framework

The theoretical framework for reviewing the electronic assessment framework for competency-based education among students with intellectual disabilities (ID) in Technical and Vocational Education and Training (TVET) institutions is grounded in the Technological Pedagogical Content Knowledge (TPACK) model. This model, developed by Mishra and Koehler (2006), offers an integrative approach for educators to incorporate technology effectively in teaching and learning processes, while simultaneously aligning pedagogical strategies and content delivery (Sierra et al., 2023). The framework highlights the critical need for blending technological knowledge, pedagogical methods, and content expertise to support diverse learners, particularly those with intellectual disabilities.

In the context of this study, the TPACK framework helps to evaluate how well the current electronic assessment frameworks in TVET institutions align with the competencies required for learners with ID. This is especially relevant as competency-based education focuses on personalized learning, where adaptive technologies play a vital role in assessing each student’s unique capabilities (Sin, 2021). TPACK’s emphasis on integrating technology with teaching content ensures that educators are not only using digital tools but are doing so in ways that enhance the learning experience for students with special needs (Cabero-Almenara et al., 2023). By applying TPACK, the study identifies gaps in the current electronic assessment frameworks used in Kenyan TVET institutions. Many frameworks lack adaptive features that can cater to the varied needs of learners with intellectual disabilities, thereby limiting their effectiveness (Tatnall, 2023). The theoretical lens of TPACK provides a structured approach to developing more inclusive and accessible frameworks, enabling the personalization of assessments that are responsive to the individual learning needs of students with intellectual disability.

RESEARCH DESIGN AND METHODOLOGY

The study used a descriptive research design to investigate an electronic assessment framework for inclusive competency-based education in Nairobi City County, Kenya. This design aimed to answer questions of how, what, where, and when, without manipulating or controlling variables but instead observing, recording, and analyzing existing data or information (Kothari, 2014). The study aimed to describe the research phenomenon and establish an electronic assessment framework to evaluate the relationships and effectiveness of inclusive competency-based education programs using both quantitative and qualitative data.

Closed-ended questionnaires were used for quantitative data collection, while qualitative data were gathered through interview schedules. The descriptive design enabled the evaluation of correlations between variables and provided an in-depth understanding of their effects. As McGregor (2017) suggested, this design was suitable for responding to the study’s objectives and questions.

The study was conducted in Nairobi County, which had more than 40 relevant institutions, categorized as 10 NGOs, 15 Community-Based Organizations (CBOs), 10 Technical and Vocational Education and Training (TVET) institutions, and 5 Vocational Institutions. In addition, the county had more than 2,000 students, 80 tutors, and 40 principals (TVET, 2023).

The study’s target population included a diverse range of educational institutions, with a total of 40 institutions representing a broad spectrum of sectors: 10 NGOs, 15 Community-Based Organizations (CBOs), 10 Technical and Vocational Education and Training (TVET) institutions, and 5 Vocational Institutions. This broad selection encompassed 2,000 students, 80 tutors, and 40 principals, with a specific focus on special needs students, particularly those with intellectual disabilities. The study aimed to address the unique challenges these students face in competency-based education assessments by incorporating adaptive technologies and strategies tailored to various disabilities.

NGOs and CBOs were integral to this study, playing a crucial role in the educational landscape for special needs education. Their involvement provided valuable insights into community-based approaches and additional support mechanisms necessary for an effective electronic assessment framework. NGOs and CBOs contribute significantly by offering support services, resources, and advocacy for special needs education. They helped in developing and implementing inclusive assessment practices, ensuring that the electronic assessment framework addressed the diverse needs of students with disabilities. This collaboration allowed for a comprehensive evaluation and analysis, integrating community-based insights with institutional practices to create an effective and inclusive assessment system, as illustrated in Table 1.

Table 1: Study Target Population

Type of Institution | Number of Institutions | Students with Intellectual Disability (ID) / Parents, Caregivers | Tutors | Principals | Examining Bodies
NGOs | 10 | 400 | 20 | 10 | –
CBOs | 15 | 600 | 30 | 15 | –
TVET Institutions | 10 | 600 | 20 | 10 | 4
Vocational Institutions | 5 | 400 | 10 | 5 | –
Total | 40 | 2,000 | 80 | 40 | 4

Source: (Author, 2024)

The study employed both census and simple random sampling techniques. Census sampling was a technique in which data was collected from every member or element of the population rather than from a subset or sample (Kothari, 2021). Census sampling was used when the population was relatively small or manageable, and it was feasible to collect data from each member (Chaudhuri, 2017). Therefore, the study used this technique to select officers from all the examining bodies, namely KNEC, KISE, TVETA, and NITA.

In addition, the study used the simple random sampling technique to select students, tutors, and principals to determine the number of participants in each case. According to Kothari (2014), random sampling was a technique used to select a sample by giving every member of an entire population equal chances to participate in the study. Thus, the study employed simple random sampling. A population of two thousand students with intellectual or learning disabilities from institutions (40), including those managed by NGOs (10), CBOs (15), TVET institutions (10), and vocational institutions (5), was picked through random sampling on different courses to assess the frameworks used, their effectiveness, and the skills gained, as shown in Table 2.

A sample represented a subset of a larger population; in this case, its characteristics were the subject of investigation (Kothari, 2014). The sampling techniques employed in the study ensured that the sample was appropriately stratified and representative. Census sampling was used for the examining bodies, namely KNEC, KISE, TVETA, and NITA, where all members were included due to their manageable population size. For students, tutors, and principals, simple random sampling was applied to give every member an equal chance to participate. This ensured diversity in the selection process across institutions managed by NGOs, CBOs, TVET, and vocational institutions. The random sampling was conducted on students enrolled in different courses, and 10% of the population from each group was selected. The sample included 160 students, 8 tutors, 5 principals, and 4 examining body representatives, totaling 177 participants, in line with Mugenda and Mugenda’s (2003) guideline of selecting 10% to 30% of the target population. The sampling was thus stratified, ensuring representation from various segments of the population while adhering to standard sampling procedures, as shown in Table 2.

Table 2: Distribution of Sample Size

Type of Institution | Number of Institutions | Students/Parents/Caregivers (10%, randomly selected) | Tutors (10%, randomly selected) | Principals (10%, randomly selected) | Examining Bodies (census sampling)
NGOs | 10 | 40 | 2 | 1 | –
CBOs | 15 | 60 | 3 | 2 | –
TVET Institutions | 10 | 40 | 2 | 1 | 4
Vocational Institutions | 5 | 20 | 1 | 1 | –
Total (sample size = 177) | 40 | 160 | 8 | 5 | 4

Source: (Author, 2024)

Research Instruments

Due to their cost-effectiveness, questionnaires were used to manage student data while collecting data from a large sample, as students represented the largest category among the target population. Questionnaires were ideal data collection tools for a large sample size (Creswell & Creswell, 2018). The questionnaire collected both quantitative and qualitative data on the effectiveness of the electronic assessment frameworks, including the Electronic Assessment framework for TVET and the developed framework programs, as well as respondents’ age, gender, and education level.

The study used interview schedules to collect primary qualitative data. The interview guide gathered data from tutors, principals, and examining body representatives (see Appendix 3). Interviews were considered practical frameworks for collecting in-depth, textual information from the research participants (Creswell & Creswell, 2018). The questionnaires and the interview guide were developed to complement each other. Kothari (2014) emphasized the need for methodological triangulation to enhance the comprehensiveness and reliability of research findings. Thus, employing multiple data collection methods offered a more holistic understanding of the research phenomenon.

Pretesting checked the validity and reliability of the data collection frameworks by using a small portion of the target respondents (Kothari, 2014). This ensured that the frameworks collected only what was intended in the study. Mugenda and Mugenda (2003) recommended that the pretest sample be based on any number between 1-10% of the actual sample size. The pretest sample was therefore 10% of the actual sample size, which amounted to 18 participants (10/100 * 177), who were not included in the final study. The study frameworks were tested using the institutions’ heads, students, tutors, and examining body representatives directly involved in the learning and assessment of students. The frameworks were also tested using content analysis from the literature review. The pretest findings were instructive in determining the questions’ readiness, eliminating redundancy, and ensuring correctness.

Validity referred to the extent to which a research instrument measured the research phenomenon of interest (McGregor, 2017). The study adopted content and construct validity methods. Content validity examined whether the instrument represented all the aspects of the research construct being studied (Adams, Khan, & Raeside, 2014). The framework needed to include all relevant subjects that the researcher intended to measure, and validity was considered compromised if it missed some aspects of measurement (Saunders, Lewis, & Thornhill, 2016). The content validity of the questionnaire in the study was ascertained by consulting with supervisors. Additionally, construct validity measured the framework’s effectiveness in capturing the intended concept (Creswell & Creswell, 2018). This was determined by analyzing the literature reviewed in Chapter Two.

The reliability of research instruments indicated the extent to which the measurements of the framework yielded similar results after repeated trials (Creswell & Creswell, 2018). The researcher used the test-retest method to assess the questionnaire’s reliability. A Cronbach’s alpha test was performed on the scores, yielding a reliability coefficient of 0.8, which indicated strong reliability.
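
For readers who wish to replicate the reliability check, the sketch below is a minimal Python illustration of how a Cronbach’s alpha coefficient could be computed from a pilot score matrix. The item layout and the simulated pilot responses are assumptions made for illustration only; the study computed its coefficient from the actual pretest data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert-type scores."""
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)        # variance of each item across respondents
    total_variance = scores.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot data: 18 pretest respondents answering 5 related Likert items (1-5).
rng = np.random.default_rng(0)
trait = rng.integers(1, 6, size=(18, 1))                        # each respondent's underlying level
pilot = np.clip(trait + rng.integers(-1, 2, size=(18, 5)), 1, 5).astype(float)

print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")
```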
The questionnaire was administered to respondents as a soft copy via Google Forms for those on busy shifts to allow them to fill it out at a convenient time. Other questionnaires were hand-delivered during working hours. Each questionnaire was accompanied by an introduction letter that introduced the research subject and the researcher, and elaborated on the purpose of the study. Respondents were expected to complete the questionnaire within a week, and reminders were sent to those who did not return their questionnaires.

Interviews were scheduled based on the convenience of the interviewees in terms of time and location. They were conducted either in person or via the Zoom platform, if it was more convenient for the interviewee, and lasted 30 minutes. Interviews were recorded after obtaining the interviewee’s consent.

Data Analysis

The data gathered from the questionnaires was organized into tables and entered into the Statistical Package for Social Sciences (SPSS). Given the empirical nature of the research, several statistical tests were used for data analysis. Descriptive statistics, including frequency, mean, and standard deviation, were utilized to summarize the data. Inferential statistics, such as regression analysis, Pearson correlation, and one-way ANOVA, were employed to examine the relationships between variables and test hypotheses. For qualitative data, content analysis and conceptual analysis were conducted to identify patterns and themes, as follows.

Objective: To Review Electronic Assessment Frameworks in TVET Colleges for Special Needs Students

The study employed the following regression equation, as proposed by Kothari (2021):

Y = β0 + β1X1 + ε

where:

• Y: Utilization of Electronic Assessment Frameworks for Special Needs Students (dependent variable, binary: Yes or No)

• X1: Familiarity with Electronic Assessment Frameworks for Special Needs Students (independent variable, ordinal: Very Familiar, Familiar, Somewhat Familiar, Not Familiar)

• β0: Intercept

• β1: Coefficient for Familiarity

• ε: Error term

This equation was designed to measure how familiarity with electronic assessment frameworks tailored for special needs students influenced their utilization in TVET colleges.
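
As a minimal sketch of how this specification could be estimated (the study itself analyzed its data in SPSS), the Python example below fits the same linear form with statsmodels. The simulated responses, the 1–4 coding of familiarity, the 0/1 coding of utilization, and the ordinary-least-squares (linear probability) treatment of the binary outcome are all assumptions for illustration; a logistic specification would be a common alternative for a Yes/No outcome.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical stand-in data (not the study's dataset):
# familiarity (X1): 1 = Not Familiar ... 4 = Very Familiar; utilization (Y): 0 = No, 1 = Yes.
rng = np.random.default_rng(42)
familiarity = rng.integers(1, 5, size=170)
utilization = (0.15 * familiarity + rng.normal(0, 0.3, size=170) > 0.5).astype(int)

X = sm.add_constant(familiarity)        # adds the intercept term (beta_0)
model = sm.OLS(utilization, X).fit()    # Y = beta_0 + beta_1 * X1 + error

print(model.params)                                       # estimated beta_0 and beta_1
print(model.rsquared, model.rsquared_adj, model.fvalue)   # fit statistics, as reported in Table 5
```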

FINDINGS, INTERPRETATION AND DISCUSSION

The response rate was 96%, as 170 of the 177 respondents completed their questionnaires, while 7 (4%) were left unfilled.

Demographic Information

Table 3: The demographic information of the study population

Demographic Information | Category | Percentage (%)
Gender | Male | 55
| Female | 45
Age Distribution | 18-25 years | 50
| 26-35 years | 30
| 36-45 years | 11
| 46-55 years | 7
| 56+ years | 2
Educational Background | Recently Graduated from High School | 50
| College | 30
| University | 16
| Other | 4
Total Respondents | | 100

Source: (Author, 2024)

The gender distribution of the respondents is fairly balanced, with 55% male and 45% female participants in the study population. The largest share (50%) of the sample is aged 18-25 years, suggesting a substantial group of contributors who are aware of new trends in education. The remaining age distribution is fairly diverse, with 26-35 years accounting for the next highest proportion at 30%, while 36-45, 46-55, and 56+ years account for 11%, 7%, and 2%, respectively. With regard to educational level, 50% of the respondents reported having recently finished high school, 30% had a college education, 16% a university education, and 4% other qualifications. This demographic diversity strengthens the study by capturing information from learners at different education levels, informing an electronic assessment model that includes students with intellectual disabilities in TVET institutions.

The Review of Electronic Assessment Framework

The study aimed to review the electronic assessment framework for competency-based education to gain a deeper understanding of its effectiveness in improving inclusive learning for students with intellectual, developmental, and learning disabilities in TVET institutions. The findings were as follows:

Familiarity with Various Electronic Assessment Frameworks

The study sought to establish the study population’s familiarity with the electronic assessment frameworks available within the CBC framework and how effectively these frameworks enhance inclusive education for students with intellectual and learning disabilities; the findings are shown in Figure 1. The data analysis revealed that 10% of respondents were very familiar with various electronic assessment frameworks, 30% were familiar, 40% were somewhat familiar, and 20% were not familiar with these frameworks.

Figure 1: The Review of Electronic Assessment Framework
Source: (Author, 2024)

This result is in line with previous studies that indicate technological adoption in education is often limited by lack of adequate training and exposure (Abd Majid & Mohd Shamsudin, 2019). This gap is particularly critical when working with intellectually disabled (ID) students, as technology plays a crucial role in tailoring assessments to meet their unique needs. The study results concur with Banes and Behnke (2019), who noted that for students with intellectual disabilities, the use of technology through frameworks like Universal Design for Learning (UDL) is key to fostering inclusivity in assessments.

The findings also align with Ekabu’s (2023) conclusions that insufficient teacher training limits effective technology integration, which is especially detrimental for ID students who require adaptive tools to access and participate in competency-based education (CBE). The study results further agree with Govender and Rajkoomar (2021), who found that educators’ limited technological proficiency negatively impacts their ability to provide inclusive learning environments for students with special needs.

Moreover, the observed transitional phase among respondents who are somewhat familiar with electronic frameworks is consistent with Ibrahim and Shiring’s (2022) findings, which emphasize the need for targeted professional development. Such training is essential to equip educators with the necessary skills to effectively use these tools, particularly for assessments involving students with intellectual disabilities, who rely on customized support to demonstrate their competencies. Thus, the results underscore the importance of increasing professional development opportunities to ensure that educators are not only familiar with, but also proficient in, using electronic assessments to support the diverse needs of students, particularly those with intellectual disabilities.

Utilization of Electronic Assessment Frameworks

The survey results indicated that a majority of respondents had experience with electronic assessment frameworks in educational settings. The results of the analysis (Figure 2) revealed that 60% of the study population reported having utilized these frameworks, while 40% had not. This distribution suggested that while there was a considerable base of experience among the respondents, a significant portion lacked direct exposure to these tools, potentially impacting their ability to evaluate them comprehensively.

Figure 2: Utilization of Electronic Assessment Frameworks
Source: (Author, 2024)

These results are in line with the findings of Govender and Rajkoomar (2021), who emphasized that electronic assessments can enhance the inclusivity and adaptability of teaching methods, particularly for students with special needs. As one tutor remarked, “These frameworks have made assessments more adaptable, allowing us to modify tasks based on the specific abilities of each learner, especially those with intellectual disabilities.”

However, the 40% who reported not utilizing these frameworks raises concerns about accessibility and training. This finding concurs with Ibrahim and Shiring’s (2022) results, which highlighted that lack of familiarity and training hinders the effective implementation of technological tools in classrooms. A principal commented, “We have seen improvements in schools where teachers are trained, but many still feel unprepared to fully integrate these frameworks, especially for assessments involving students with disabilities.” This is consistent with Ekabu’s (2023) argument that the successful implementation of competency-based assessments, particularly for students with ID, depends heavily on professional development and training in technology use. Thus, while a majority of educators have begun utilizing electronic assessment frameworks, further training and access are needed to ensure that all students, especially those with intellectual disabilities, can benefit from more inclusive and personalized assessments. The study results suggest a need for increased professional development in this area to bridge the gap between potential and actual utilization.

Strengths of the Existing Electronic Assessment Frameworks

Among those who had used electronic assessment frameworks, several strengths were identified, as shown in Table 4. The study revealed several strengths of electronic assessment frameworks that have significant implications for enhancing the educational experience of students with intellectual disabilities (ID) in Technical and Vocational Education and Training (TVET) institutions.

Table 4: Strengths of Electronic Assessment Frameworks

Strength | Frequency | Percentage
Adaptive learning features | 50 | 29%
Real-time feedback | 40 | 24%
Accessibility options for different disabilities | 30 | 18%
Others (not specified) | 52 | 29%

Source: (Author, 2024)

The data results revealed that the most frequently cited strengths include adaptive learning features, which were identified by 29% of the respondents. These features allow the assessment tools to adjust to the unique learning needs of each student, enabling a more personalized and effective evaluation process.

Real-time feedback, cited by 24% of respondents, is another crucial strength of electronic assessments. This feature facilitates earlier identification of learning challenges and allows for immediate referrals to specialists when necessary. By providing instant reports based on personalized assessments, educators can quickly implement mitigation measures tailored to the individual student’s needs. Additionally, 18% of respondents highlighted the accessibility options available in electronic frameworks, which cater to various disabilities, ensuring that all students, regardless of their challenges, have equal opportunities to succeed.

Moreover, electronic assessment frameworks provide valuable background information for students, helping educators understand each student’s learning history and context. This comprehensive understanding allows for better-targeted interventions, improving the overall effectiveness of the educational strategies employed. The combination of these strengths makes electronic assessment frameworks a powerful tool in enhancing the learning outcomes for students with ID in TVET institutions.

Weaknesses of Electronic Assessment Frameworks

Conversely, several weaknesses were also observed, as shown in Figure 3. The most significant weakness, cited by 35.3%, was the lack of comprehensive accessibility. This suggested that many frameworks failed to fully accommodate all students, particularly those with diverse disabilities. The non-adaptive design of some frameworks was highlighted by 23.5%, indicating that these tools often did not adjust to the varying competency levels of students, thus hindering effective assessment. Insufficient training for educators on the usage of these frameworks was a concern for 17.6%, pointing to a gap in the necessary support and knowledge for effective implementation. Lastly, 23.5% mentioned other unspecified weaknesses, suggesting additional areas needing improvement.

Figure 3: Weaknesses of Electronic Assessment Frameworks
Source: (Author, 2024)

This finding aligns with Banes and Behnke (2019), who noted the importance of inclusive design in assessment tools. Additionally, 23.5% cited non-adaptive designs, meaning these frameworks did not adjust to varying student competency levels, thereby hindering effective evaluation. Another 17.6% pointed to insufficient training for educators, echoing the concerns raised by Ibrahim and Shiring (2022) about gaps in educator preparedness. Finally, 23.5% of respondents mentioned other unspecified weaknesses, suggesting that further improvements are needed to make these frameworks fully functional and inclusive for all learners. Therefore, these weaknesses suggest that, despite the potential of electronic assessments, substantial improvements are needed to ensure they truly support all learners, especially those with intellectual and physical disabilities. Addressing these gaps is critical for achieving equitable and meaningful assessments.

Suggestions for improving the Electronic Assessment Frameworks

The study sought the study population’s suggestions for improving the existing electronic assessment frameworks within the CBC framework. The respondents provided several key suggestions to enhance the effectiveness of electronic assessment frameworks for competency-based education. These suggestions addressed critical areas requiring attention to better support special needs students in Technical and Vocational Education and Training (TVET) institutions, as shown in Figure 4.

The findings indicate that 30% of respondents emphasized the importance of incorporating universal design principles to enhance accessibility for all students, promoting equity in assessments. Additionally, 24% highlighted the need for extensive educator training on electronic assessment tools to ensure effective implementation. Another 17% recommended developing adaptive tools tailored to students with various disabilities for more personalized evaluations. Lastly, 29% stressed the importance of robust data privacy measures to protect student information.

Figure 4: Suggestions for improving the Electronic Assessment Frameworks
Source: (Author, 2024)

These results are consistent with the growing emphasis on inclusive education, as highlighted by Banes and Behnke (2019), who discuss the evolution of Universal Design for Learning (UDL). The findings underscore the need for universal design principles (30%) to enhance accessibility, extensive educator training on electronic assessment tools (24%), and the development of adaptive tools for students with disabilities (17%). Additionally, the importance of robust data privacy measures (29%) aligns with ethical considerations in educational technology. Collectively, these insights call for a comprehensive approach to fostering equity in assessments and creating supportive learning environments for all students.

Regression Analysis Results

This regression analysis sought to measure how varying degrees of familiarity with electronic assessment frameworks influenced their utilization among special needs students in TVET colleges. The results are presented in Table 5. The analysis revealed that familiarity with electronic assessment frameworks for special needs students significantly predicted their utilization (β = 0.32, p < 0.01). The model accounted for 51% of the variance in utilization, indicating that greater familiarity was associated with higher utilization rates in TVET colleges.

Table 5: Regression Analysis Results

Model Summary | Value
R² | 0.53
Adjusted R² | 0.51
ANOVA | Value
F | 27.35
p-value | < 0.01
Coefficients | β | Standard Error | t-value | p-value
Intercept | 1.25 | 0.28 | 4.46 | < 0.01
Familiarity Level | 0.32 | 0.07 | 4.57 | < 0.01

Source: (Author, 2024)
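
To illustrate how the coefficients in Table 5 are read, the short sketch below evaluates the fitted equation Ŷ = 1.25 + 0.32·X1 at each familiarity level, assuming a 1–4 coding of familiarity. That coding is an assumption made here for illustration, so the absolute values are indicative only; the substantive point is that each one-level increase in familiarity adds 0.32 to the predicted utilization score.

```python
# Minimal sketch: evaluate the fitted model from Table 5 across familiarity levels.
# The 1-4 coding below is an illustrative assumption, not the study's stated coding.

BETA_0 = 1.25   # intercept (Table 5)
BETA_1 = 0.32   # coefficient for familiarity (Table 5)

familiarity_levels = {
    "Not Familiar": 1,
    "Somewhat Familiar": 2,
    "Familiar": 3,
    "Very Familiar": 4,
}

for label, code in familiarity_levels.items():
    predicted = BETA_0 + BETA_1 * code
    print(f"{label:>17}: predicted utilization score = {predicted:.2f}")
```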

These findings pointed to a gap in the widespread adoption and comprehensive understanding of electronic assessment frameworks. This was echoed by Shafie, Abd Majid, and Ismail (2019), who found a similar divide in technological competence among educators. One respondent’s experience reflected this gap: “I’ve heard about various frameworks but haven’t had much hands-on experience with them. It’s a bit overwhelming to catch up.”

Strengths identified included adaptive learning features and real-time feedback, which were recognized as positive aspects of existing frameworks. These elements were consistent with Banes and Behnke’s (2019) findings, which highlighted their significant impact on inclusive education. A respondent appreciated the adaptive features: “The adaptive learning features are great because they allow for personalization, which helps meet individual student needs.” Another respondent valued real-time feedback: “Immediate feedback helps keep students engaged and allows for quick corrections, which is crucial for their development.”

However, the analysis also uncovered notable weaknesses, such as inadequate accessibility and non-adaptive designs. These issues aligned with concerns raised by Lyner-Cleophas (2019), who critiqued frameworks for failing to address diverse learner needs. One interviewee expressed frustration: “The current frameworks often overlook students with disabilities. It’s disheartening to see that some students are left behind due to accessibility issues.” Further support for these concerns was provided by Kim and Lee (2020), who found that many frameworks did not accommodate various learning styles and disabilities, thus hindering inclusive education. Anderson and McCormick (2021) also noted that the rigidity of many frameworks limited their effectiveness, particularly in adapting to evolving educational contexts.

CONCLUSIONS

The research findings have several important implications for the development and implementation of electronic assessment frameworks in competency-based education:

Need for Greater Flexibility and Adaptability

The review and subsequent development of the electronic assessment framework highlighted the importance of flexibility and adaptability in supporting competency-based education. The framework’s success in accommodating diverse learning needs underscores the necessity of designing tools that can adjust to individual student requirements and learning styles.

Importance of Inclusivity in Design

The development of an inclusive framework demonstrated that accessibility must be a core consideration in the design of electronic assessment tools. Ensuring that the framework supports all students, including those with disabilities, is essential for achieving equitable educational outcomes.

Significance of Real-Time Feedback and Tracking

The evaluation of the developed framework confirmed the value of real-time feedback and effective tracking mechanisms in supporting student progress. These features contribute to a more personalized and responsive learning experience, which is crucial for the success of competency-based education.

Ongoing Training and Support for Educators

The challenges identified in the evaluation phase indicate that ongoing training and support for educators are critical for the successful implementation of electronic assessment frameworks. Educators need comprehensive training to effectively utilize new tools and integrate them into their teaching practices.

RECOMMENDATIONS

Based on the findings, the following recommendations are proposed:

1. Enhance Flexibility and Adaptability

Developers should focus on creating electronic assessment frameworks that offer greater flexibility and adaptability to meet the diverse needs of competency-based education. This includes incorporating customizable assessment pathways and supporting various learning styles.

2. Prioritize Inclusivity in Design

Frameworks should be designed with inclusivity as a central principle. This involves ensuring that all features and functionalities are accessible to students with disabilities and that the framework supports equitable learning opportunities for everyone.

3. Implement Real-Time Feedback Mechanisms

Incorporate real-time feedback mechanisms into electronic assessment frameworks to provide immediate support and guidance to students. This feature is essential for facilitating personalized learning and helping students stay on track with their competencies.

4. Provide Comprehensive Educator Training

Institutions should offer extensive training programs for educators to help them effectively use the new electronic assessment frameworks. Training should cover both the technical aspects of the framework and strategies for integrating it into instructional practices.

5. Address Challenges in Accessibility Implementation

Developers should address any challenges related to the consistent implementation of accessibility features across the framework. Ensuring that all components are fully accessible is crucial for maintaining the framework’s inclusivity.

Implementation

These recommendations should be adopted by educational technology developers, competency-based education institutions, and policy makers. Developers are responsible for incorporating the suggested features into their tools, while institutions should focus on providing the necessary training and support for educators. Policy makers should ensure that standards for accessibility and inclusivity are upheld and promote ongoing feedback and refinement of assessment tools.

Suggestions for Further Research

To build on the findings of this study, several areas for further research are suggested:

1. Longitudinal Impact Studies

Future research could explore the long-term effects of implementing inclusive electronic assessment frameworks on student outcomes and engagement. Longitudinal studies would provide insights into the sustained effectiveness of these tools and highlight areas for further improvement.

2. Comparative Analysis of Assessment Frameworks

Comparative studies could analyze different electronic assessment frameworks to identify best practices and benchmarks. This research would offer valuable insights into the relative strengths and weaknesses of various frameworks and inform the development of more effective tools.

3. Exploration of Emerging Technologies

Investigating the potential of emerging technologies, such as artificial intelligence and machine learning, could reveal new opportunities for enhancing electronic assessment frameworks. Research in this area might explore how these technologies can be used to improve personalization and adaptability in assessment tools.

REFERENCES

  1. Abd Majid, F., & Mohd Shamsudin, N. (2019). Identifying factors affecting acceptance of virtual reality in classrooms based on technology acceptance model (TAM). Asian Journal of University Education, 15(2), 51. https://doi.org/10.24191/ajue.v15i2.7556
  2. Akala, B. M. (2021). Revisiting education reform in Kenya: A case of Competency Based Curriculum (CBC). Social Sciences & Humanities Open, 3(1), 100107. https://doi.org/10.1016/j.ssaho.2021.100107
  3. Akala, B. M. (2021). Revisiting education reform in Kenya: A competency-based curriculum (CBC) case. Social Sciences & Humanities Open, 3(1), 1–8. https://doi.org/10.1016/j.ssaho.2021.100107
  4. Amutabi, M. N. (2019). Competency Based Curriculum (CBC) and the end of an Era in Kenya’s Education Sector and Implications for Development: Some Empirical Reflections. Journal of Popular Education in Africa. 3(10), 45 – 66
  5. Anud, E. M., & Caro, V. B. (2023). Relationship between technological pedagogical and content knowledge (TPACK) self-efficacy, 21st century instructional skills and performance of Science Teachers. Advances in Social Science, Education and Humanities Research, 620–636. https://doi.org/10.2991/978-2-38476-056-5_60
  6. Banes, D., & Behnke, K. (2019). The potential evolution of Universal Design for Learning (UDL) through the lens of technology innovation. Universal Access through Inclusive Instructional Design, 323–331. https://doi.org/10.4324/9780429435515-43
  7. Bender, T. (2012). Discussion-based online teaching to enhance student learning: Theory, practice, and assessment. Stylus Publishing.
  8. Chaudhuri, A. (2017). Survey sampling. New Delhi: Chapman and Hall/CRC.
  9. Donnelly, R., Kennelly, I., & McAvinia, C. (2022). A multimodal framework for supporting Academic Writers’ Perspectives, practice and performance. Teaching in Higher Education, 1–17. https://doi.org/10.1080/13562517.2022.2048365
  10. Ekabu, P. K. (2023). Teacher educator’s pedagogical effectiveness in implementing competency-based teacher education and competence-based assessment programs: A case of primary diploma teacher training colleges in Kenya. Journal of Education and Practice. https://doi.org/10.7176/JEP/14-33-07
  11. Friedman, C. (2023). Medicaid home- and community-based services waivers for people with intellectual and developmental disabilities. Intellectual and Developmental Disabilities, 61(4), 269–279.
  12. Gichuru, F. M., Khayeka-Wandabwa, C., Olkishoo, R. S., Marinda, P. A., Owaki, M. F., Kathina, M. M., & Yuanyue, W. (2021). Education curriculum transitions in Kenya—an account and progress to competency-based education policy. Curriculum Perspectives. https://doi.org/10.1007/s41297-021-00137-5
  13. Govender, R., & Rajkoomar, M. (2021). Transitions in pedagogies: A multimodal learning, teaching and assessment model in Higher Education. Covid-19: Interdisciplinary Explorations of Impacts on Higher Education, 57–74. https://doi.org/10.52779/9781991201195/04
  14. Ibrahim, A., & Shiring, E. (2022). The relationship between educators’ attitudes, perceived usefulness, and perceived ease of use of instructional and web-based technologies: Implications from technology acceptance model (TAM). International Journal of Technology in Education, 5(4), 535–551. https://doi.org/10.46328/ijte.285
  15. Inyega, J. O., Arshad-Ayaz, A., Naseem, M. A., Mahaya, E. W., & Elsayed, D. (2021). Post-Independence Basic Education in Kenya: An Historical Analysis of Curriculum Reforms. FIRE: Forum for International Research in Education, 7(1), 1–23. https://doi.org/10.32865/fire202171219
  16. Jackson, R. M., & Lapinski, S. D. (2019). Restructuring the blended learning environment on campus for equity and opportunity through UDL. Transforming Higher Education through Universal Design for Learning, 297–311. https://doi.org/10.4324/9781351132077-18
  17. Kapucu, N., & Koliba, C. (2017). Using competency-based portfolios as a pedagogical framework and assessment strategy in MPA programs. Journal of Public Affairs Education, 23(4), 993–1016. https://doi.org/10.1080/15236803.2017.12002301
  18. Karisa, A., McKenzie, J., & De Villiers, T. (2021). “It’s a school but it’s not a school”: understanding father involvement in the schooling of children with intellectual disabilities in Kenya. International Journal of Inclusive Education, 1–16. https://doi.org/10.1080/13603116.2021.1980123
  19. Khan, R., & Gul, F. (2022). Exploring the relationship between digital literacy skills and technological pedagogical and content knowledge (TPACK) among secondary school teachers. Global Social Sciences Review, VII(II), 196–206. https://doi.org/10.31703/gssr.2022(VII-II).19
  20. KICD (2018). Report on Competence Based Curriculum reforms presented to the National Steering Committee on 3rd January 2018. Nairobi: Ministry of Education
  21. Kothari, B. L. (2021). Research methodology tools and techniques. Jaipur Abd Publ.
  22. Kothari, C. R. (2014). Research Methodology: Methods and Techniques. New Delhi: New Age International Ltd.
  23. Lacković, N., & Olteanu, A. (2023). Multimodal identity in Higher Education, an identity+. Relational and Multimodal Higher Education, 243–266. https://doi.org/10.4324/9781003155201-12
  24. Ludwikowska, K. (2022). Competency-based tests as a framework for teacher evaluation in Higher Education Institutions. Central European Management Journal, 30(3), 85–111. https://doi.org/10.7206/cemj.2658-0845.83
  25. Lyner-Cleophas, M. (2019). The prospects of Universal Design for learning in South Africa to facilitate the inclusion of all learners. Universal Access through Inclusive Instructional Design, 35–45. https://doi.org/10.4324/9780429435515-5
  26. Maate, P. M. (2016). E-portfolio model for student assessment in education : a case of Nairobi secondary schools (Thesis). Strathmore University. Retrieved on 14th August, 2023, from http://su-plus.strathmore.edu/handle/11071/4854
  27. Muchira, J. M., Morris, R. J., Wawire, B. A., & Oh, C. (2023). Implementing competency-based curriculum (CBC) in Kenya: Challenges and lessons from South Korea and USA. Journal of Education and Learning, 12(3), 62. https://doi.org/10.5539/jel.v12n3p62
  28. Mugambi, M. M., & Chepkonga, S. Y. (2022). Application of pragmatism to competency-based curriculum (CBC) in Kenya: An analysis of basic education curriculum framework. International Journal of Current Science Research and Review, 05(10), 3984–3992. https://doi.org/10.47191/ijcsrr/v5-i10-22
  29. Mugenda, O., & Mugenda, A. (2003). Research methods. Quantitative & qualitative approaches, Nairobi: Acts press.
  30. Mwakyobwe, V. E., & Shawa, M. C. (2023). Pedagogical and assessment practices towards competency-based education in Tanzania teacher colleges. Asian Journal of Education and Social Studies, 40(2), 1–12. https://doi.org/10.9734/ajess/2023/v40i2867
  31. Naidoo, M., Brijlal, P., Cader, R., Gordon, N. A., Rayner, C. A., & Viljoen, K. (2022a). Development of a competency-based Clinical Assessment Instrument for exit level oral hygiene students at the University of Western Cape. BMC Oral Health, 22(1). https://doi.org/10.1186/s12903-022-02498-3
  32. Naidoo, M., Brijlal, P., Cader, R., Gordon, N. A., Rayner, C. A., & Viljoen, K. (2022b). Development of a competency-based Clinical Assessment Instrument for exit level oral hygiene students at the University of Western Cape. BMC Oral Health, 22(1). https://doi.org/10.1186/s12903-022-02498-3
  33. Newell, V., Phillips, L., Jones, C., Townsend, E., Richards, C., & Cassidy, S. (2023). A systematic review and meta-analysis of suicidality in autistic and possibly autistic people without co-occurring intellectual disability. Molecular Autism, 14(1), 12. https://doi.org/10.1186/s13229-023-00544-7
  34. Nganga, C. S. (2023). Language and ideologies: A critical analysis of the basic education curriculum framework (BECF) during the transition from 8-4-4 to the competence-based curriculum (CBC) in Kenya. European Journal of Education Studies, 10(9). https://doi.org/10.46827/ejes.v10i9.4967
  35. Nyaboke, R., Kereri, D., & Loice Kerubo Nyabwari, L. K. (2024). View of competence-based curriculum (CBC) in Kenya and the challenge of vision 2030. Ijets.org; International Journal of Education, Technology and Science. https://ijets.org/index.php/IJETS/article/view/24/21
  36. Okagbue, E. F., Ezeachikulo, U. P., Nchekwubemchukwu, I. S., Chidiebere, I. E., Kosiso, O., Ouattaraa, C. A., & Nwigwe, E. O. (2023). The effects of covid-19 pandemic on the education system in Nigeria: The role of competency-based education. International Journal of Educational Research Open, 4, 100219. https://doi.org/10.1016/j.ijedro.2022.100219
  37. Ross, J., Curwood, J. S., & Bell, A. (2020). A multimodal assessment framework for higher education. E-Learning and Digital Media, 17(4), 290–306. https://doi.org/10.1177/2042753020927201
  38. Shafie, H., Abd Majid, F., & Ismail, I. S. (2019). Technological Pedagogical Content Knowledge (TPACK) in teaching 21st century skills in the 21st Century classroom. Asian Journal of University Education, 15(3), 24. https://doi.org/10.24191/ajue.v15i3.7818
  39. Sholikah, M., & Sutirman, S. (2020). How technology acceptance model (TAM) factors of electronic learning influence education service quality through students’ satisfaction. TEM Journal, 1221–1226. https://doi.org/10.18421/tem93-50
  40. Sierra, Á. A. J., Iglesias, J. M. O., Cabero-Almenara, J., & Palacios-Rodríguez, A. (2023). Development of the teacher’s technological pedagogical content knowledge (TPACK) from the Lesson Study: A systematic review. Frontiers in Education, 8. https://doi.org/10.3389/feduc.2023.1078913
  41. Sin, H. Y. (2021). Competency assessment framework and faculty assessors for competency-based Pharmacy Education: A preliminary study of its applications and advantages. Korean Journal of Clinical Pharmacy, 31(4), 285–292. https://doi.org/10.24304/kjcp.2021.31.4.285
  42. Son, Y., & Shin, S. (2020). A case study on competency-based curriculum at college level in France. The Korea Association for Care Competency Education, 5(2), 67–81. https://doi.org/10.52616/jccer.2020.5.2.67
  43. Su, W. (2021). Rubric-based self-assessment of Chinese-english interpreting. Testing and Assessment of Interpreting, 67–84. https://doi.org/10.1007/978-981-15-8554-8_4
  44. Syatriana, E. (2019). A Model of Creating Instructional Materials Based on the School Curriculum for Indonesian Secondary Schools. https://doi.org/10.31219/osf.io/z8gf9
  45. Tabot, Benedicta A, & Tuimur, H. N. (2022). The Role of Curriculum Objectives and Evaluation in Development of Instructional Efficacy in Special Needs Education among Primary Teacher Trainees in Kenya. Kabianga.ac.ke. https://doi.org/2736-4534
  46. Tatnall, A. (2023). Editorial for EAIT issue 5, 2023. Education and Information Technologies, 28(5), 4819–4830. https://doi.org/10.1007/s10639-023-11819-0
  47. Thibodeau, T. (2019). UDL, online accessibility, and virtual reality. Transforming Higher Education through Universal Design for Learning, 329–345. https://doi.org/10.4324/9781351132077-20
  48. Toale, C., Morris, M., & Kavanagh, D. O. (2021). Assessing operative skill in the competency-based education era. Annals of Surgery, 275(4). https://doi.org/10.1097/sla.0000000000005242
  49. Wagner, D. A., Castillo, N. M., & Lewis, S. (2022). Learning, Marginalization, and Improving the Quality of Education in Low-income Countries. Open Book Publishers.
  50. Yan, E. (2021). Chinese University Students’ perceptions of faculty technological pedagogical content knowledge (TPACK), technology integration, and level of autonomous learning. Proceedings of the 2021 AERA Annual Meeting. https://doi.org/10.3102/1681227
  51. Yıldırır, A. (2023). Assessment and evaluation of Cardiology Residency Training in Türkiye: A national survey. The Anatolian Journal of Cardiology, 580–591. https://doi.org/10.14744/anatoljcardiol.2023.3282
  52. Zablotsky, B., Black, L. I., Maenner, M. J., Schieve, L. A., Danielson, M. L., Bitsko, R. H., Blumberg, S. J., Kogan, M. D., & Boyle, C. A. (2019). Prevalence and Trends of Developmental Disabilities among Children in the United States: 2009-2017. Pediatrics, 144(4), e20190811. https://doi.org/10.1542/peds.2019-0811
