INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS | Volume IX Issue XXVII November 2025 | Special issue
Evaluation of the Content and Face Validity of the STEM-TPACK
Teaching and Learning Practice, Professional Development, Attitude,
and Self-Efficacy Instrument (STPAS-I) among Polytechnic
Lecturers
Falinah @ Fazlina Misol @ Nasip¹, Denis Andrew D. Lajium²
¹Politeknik Kota Kinabalu
²Universiti Malaysia Sabah
DOI: https://dx.doi.org/10.47772/IJRISS.2025.927000003
Received: 12 November 2025; Accepted: 18 November 2025; Published: 26 November 2025
ABSTRACT
This study aimed to evaluate the content and face validity of the Science, Technology, Engineering, and
Mathematics- Technological Pedagogical and Content Knowledge (STEM-TPACK) Teaching and Learning
Practice, Professional Development, Attitude, and Self-Efficacy Instrument (STPAS-I) developed for Malaysian
polytechnic lecturers. The STPAS-I consists of 69 items across four constructs: STEM-TPACK Teaching and
Learning Practice, Professional Development, Attitude, and Self-Efficacy. A quantitative descriptive design was
employed, involving seven experts in STEM education and psychometrics to assess item clarity, relevance, and
representativeness. The Face Validity Index (FVI), Content Validity Index (CVI), and Content Validity Ratio
(CVR) were used for analysis. Findings revealed that the FVI demonstrated excellent clarity and
comprehensibility, with item-level (I-FVI) values ranging from 0.86 to 1.00 and an overall S-FVI/Ave of 0.98.
For content validity, I-CVI values ranged between 0.86 and 1.00, with S-CVI/Ave values of 0.96 for STEM-
TPACK Teaching and Learning Practice, 0.98 for Professional Development, 1.00 for Attitude, and 0.99 for
Self-Efficacy. The CVR results also showed strong expert consensus, with 48 items achieving a perfect score of
1.00 and 8 items scoring 0.71, meeting the acceptable threshold. Overall, the STPAS-I demonstrated high face
and content validity, confirming its suitability for assessing STEM-related teaching competencies among
polytechnic lecturers. Future research should focus on pilot testing and psychometric evaluation, including
exploratory and confirmatory factor analyses for broader educational settings.
Keywords: STEM-TPACK, professional development, attitude, self-efficacy, content validity, face validity
INTRODUCTION
In recent years, integrating Science, Technology, Engineering, and Mathematics (STEM) education with
Technological Pedagogical and Content Knowledge (TPACK) has become a global educational priority. The
STEM-TPACK framework highlights teachers’ ability to effectively incorporate technology in teaching STEM
content, fostering students’ higher-order thinking and problem-solving skills (Tan, Purwaningsih, Taqwa, Putri,
& Kurniawan, 2024; Chai, Jong, & Yan, 2020; Chai, Rahmawati, & Jong, 2020; Thibaut et al., 2018; Mishra &
Koehler, 2006). In Malaysia, the Ministry of Higher Education (MOHE) continues to emphasize STEM-focused
teaching in polytechnics, aiming to equip graduates with interdisciplinary skills that support the Fourth Industrial
Revolution (IR 4.0) agenda (Idris & Bacotang, 2023; Zaid & Kamin, 2023; Azman & Ibrahim, 2023; Zulnaidi,
Abdul Rahim, & Mohd Salleh, 2020).
However, effective STEM-TPACK implementation depends heavily on lecturers’ professional development,
attitudes, and self-efficacy toward teaching innovation (Mansour, Said, & Abu-Tineh, 2024; Wilson, Riggs, &
Bohn, 2023). Professional development ensures that lecturers acquire up-to-date pedagogical and technological
competencies, while positive attitudes and strong self-efficacy influence their motivation and confidence to apply
integrated STEM teaching strategies. Despite growing attention to these factors, there remains a lack of validated
instruments specifically designed to measure polytechnic lecturers’ STEM-TPACK teaching and learning
practices alongside professional development, attitude, and self-efficacy constructs in the Malaysian context.
Instrument development in educational research requires rigorous validity evaluation to ensure that the
instrument accurately measures the intended constructs (Setiawan, Wagiran, & Alsamiri, 2024; Zamanzadeh et
al., 2015; Polit & Beck, 2006). Two critical aspects of validity are content validity and face validity. Content
validity assesses the extent to which items adequately represent the construct domain, typically judged by experts
through quantitative indices such as the Content Validity Index (CVI) (Ahmad Fakhrin & Idris, 2025;
Kamarulzaman, Mohamed Shuhidan, Abdul Wahab, & Toha, 2023). Face validity examines the clarity,
relevance, and comprehensibility of items from the perspective of potential respondents, ensuring the instrument
appears suitable and unambiguous in the target context (Ibrahim & Mohd Matore, 2024). Establishing both types
of validity is essential before proceeding to large-scale data collection or advanced analyses such as factor
analysis or structural equation modelling (Keetharuth, Brazier, Connell, Kharicha, & Forbes, 2018).
Therefore, this study aims to evaluate the content and face validity of the STEM-TPACK Teaching and Learning
Practice, Professional Development, Attitude, and Self-Efficacy Instrument developed for polytechnic lecturers.
The validation process involved a panel of experts assessing item relevance, representativeness, and clarity, as
well as a group of respondents reviewing the instrument for comprehensibility and appearance. The findings
provide empirical evidence on the suitability and quality of the developed instrument, ensuring it meets
psychometric standards for future use in evaluating STEM-TPACK teaching and learning practices in Malaysian
polytechnics.
LITERATURE REVIEW
The integration of STEM education and the TPACK framework has become increasingly essential in preparing
educators to teach effectively in the 21st-century learning environment. The TPACK model, proposed by Mishra
and Koehler (2006), emphasizes the dynamic interaction between teachers’ knowledge of technology, pedagogy,
and content, forming a foundation for effective technology integration in teaching. In the context of STEM
education, TPACK provides a comprehensive framework that supports educators in designing and implementing
multidisciplinary lessons that foster critical thinking, creativity, and innovation (Abdullah & Mahmud, 2024;
Thibaut et al., 2018). Studies have shown that educators with a high level of TPACK competency are more
capable of integrating technology to enhance student engagement and learning outcomes (Sonsupap, Cojorn, &
Sitti, 2024; Scott, 2021; Usman, Auliya, Susita, & Marsofiyati, 2022).
Within Malaysian polytechnics, the implementation of STEM-TPACK-oriented teaching is aligned with national
education policies that promote Science, Technology, Engineering, and Mathematics as key areas for industrial
growth and innovation (Ministry of Higher Education Malaysia, 2024). Polytechnic lecturers play a vital role in
equipping students with STEM-related skills relevant to Industry 4.0 demands. However, research indicates that
many lecturers face challenges in integrating technology effectively into STEM teaching due to limited
pedagogical training, inconsistent professional development programs, and varying attitudes toward instructional
technology (Yunus & Joblie, 2022; Omar, 2021). These challenges underline the importance of assessing
lecturers’ STEM-TPACK teaching and learning practices together with their professional development
experiences, attitudes, and self-efficacy levels to strengthen STEM education quality in higher learning
institutions.
Professional development is widely recognized as a key driver of teaching competency and innovation.
Continuous training allows educators to update their pedagogical and technological skills and apply new
strategies in the classroom (Napitupulu et al., 2025). In STEM contexts, effective professional development
programs should promote hands-on, inquiry-based learning experiences that foster integrated STEM teaching
(Capraro et al., 2021). Moreover, lecturers’ attitudes toward STEM teaching influence their willingness to adopt
new instructional technologies and cross-disciplinary teaching methods (Mohamad Hasim et al., 2022; Cribbs,
Duffin, & Day, 2022). A positive attitude correlates with higher engagement and creativity in lesson design,
while negative attitudes can hinder technology adoption and innovative practice. Equally important, self-
efficacy, the belief in one’s own capability to perform specific teaching tasks, strongly predicts how effectively
lecturers implement STEM-TPACK strategies (Bandura, 1997; Palmer et al., 2015; Kim & Park, 2023). High
self-efficacy encourages risk-taking and perseverance in integrating technology into complex STEM lessons,
whereas low self-efficacy often results in resistance or minimal adoption.
Given the multidimensional nature of these constructs, the development of a valid and reliable instrument is
crucial to measure STEM-TPACK teaching and learning practice, professional development, attitude, and self-
efficacy among polytechnic lecturers. Previous studies have highlighted the importance of validating such
instruments to ensure that they accurately reflect the intended constructs (Akbar et al., 2023; Syariff, Fuad, Musa,
& Yusof, 2022; Zamanzadeh et al., 2015; Polit & Beck, 2006). Content validity focuses on the relevance,
representativeness, and clarity of instrument items as evaluated by experts, while face validity ensures that the
items appear appropriate and understandable to respondents (Yusoff, 2019; Abdullah et al., 2024). Lawshe’s
(1975) Content Validity Ratio (CVR) and Polit and Beck’s CVI are commonly applied to quantify expert
agreement on item relevance. These validation procedures are foundational to ensuring psychometric soundness
before conducting large-scale reliability and construct validity assessments.
Despite the growing research on TPACK and STEM education, limited empirical evidence exists on the
validation of instruments that simultaneously assess STEM-TPACK teaching and learning practices,
professional development, attitude, and self-efficacy, particularly within the Malaysian polytechnic context.
Most prior studies have adapted TPACK scales designed for school teachers or general education settings
without proper contextual validation for tertiary-level STEM teaching (Li & Nugraha, 2025; Abdullah et al.,
2024; Shafie, Abd Majid, & Ismail, 2024; Li & Noori, 2023). Therefore, establishing both content and face
validity of the proposed instrument is a critical step to ensure it accurately captures the unique teaching context
of polytechnic lecturers. This study addresses that gap by systematically evaluating the content and face validity
of the developed instrument, contributing to the body of knowledge on STEM-TPACK measurement and
providing a foundation for future empirical investigations into factors influencing effective STEM teaching in
higher education.
Research Objectives
The following objectives guided this research:
1. To evaluate the face validity of the STPAS-I using Face Validity Index (FVI) analysis.
2. To evaluate the content validity of the STPAS-I using Content Validity Ratio (CVR) analysis.
3. To evaluate the content validity of the STPAS-I using Content Validity Index (CVI) analysis.
RESEARCH METHODS
Research Design and Instrument
This study employed a quantitative descriptive design focused on expert evaluation to establish the content
and face validity of the STPAS-I (STEM-TPACK Teaching and Learning Practice, Professional Development,
Attitude, and Self-Efficacy Instrument). The validation process
aimed to ensure that the instrument items were relevant, representative, and clearly formulated to measure the
intended constructs among polytechnic lecturers. The STPAS-I instrument was designed to measure four primary
constructs: (i) STEM-TPACK Teaching and Learning Practice, (ii) Professional Development, (iii) Attitude, and
(iv) Self-Efficacy. Each construct was developed based on an extensive literature review of existing models,
including the TPACK framework (Mishra & Koehler, 2006), Bandura’s Self-Efficacy Theory (1997), and
established scales regarding professional development, teacher attitudes toward STEM education, and self-
efficacy.
The initial instrument comprised a total of 69 items categorized into four constructs: STEM-TPACK Teaching
and Learning Practice (26 items), Professional Development (16 items), Attitude (11 items), and Self-Efficacy
(16 items). All items were rated on a five-point Likert scale ranging from 1 (Strongly Disagree) to 5 (Strongly
Agree). The draft instrument was reviewed by experts for validation prior to pilot testing. A total of seven experts
were purposively selected to evaluate the instrument. The panel comprised subject-matter experts in STEM
education, educational measurement, and curriculum design from polytechnics. Each expert had more than ten
years of experience in higher education or research related to STEM teaching and psychometric evaluation.
Face and Content Validity Procedure
The validation process involved two main stages: content validity and face validity, conducted with the
participation of seven expert reviewers. The primary objective was to ensure that all items in the STPAS-I
instrument were relevant, clear, and capable of accurately representing the intended constructs of STEM-TPACK
Teaching and Learning Practice, Professional Development, Attitude, and Self-Efficacy among polytechnic
lecturers.
The Face Validity Index (FVI) was used to assess the clarity and comprehensibility of the questionnaire items.
The criteria for face validity were based on Oluwatayo (2012) and covered the acceptability of the instrument
format, clarity of instructions, appropriateness of wording, suitability of font, correctness of spelling, proper
grammar, and the use of relevant terminology. Each criterion was rated on a 4-point Likert scale (strongly
disagree = 1, disagree = 2, agree = 3, strongly agree = 4), which was dichotomized for analysis: ratings of 3 or
4 were recoded as “1” and ratings of 1 or 2 as “0”. Four indices were computed: Item-Level Face Validity Index (I-FVI), Scale-
Level Face Validity Index (S-FVI), Scale-Level Face Validity Index with Universal Agreement (S-FVI/UA),
and Scale-Level Face Validity Index Averaging method (S-FVI/Ave). The I-FVI was calculated as the number
of raters who rated an item as clear divided by the total number of raters. The S-FVI/Ave was derived by
averaging I-FVI scores across all items and by averaging clarity ratings across raters, adapted from the CVI
calculation method. The S-FVI/UA represented the proportion of items with 100% agreement among raters on
clarity. Interpretation of FVI values followed the CVI benchmark; an acceptable level is 0.80, and for panels
exceeding five raters, a value of ≥ 0.83 is considered satisfactory.
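To make these computations concrete, the sketch below (Python; the function names and the example ratings matrix are illustrative, not the study’s data) derives I-FVI, S-FVI/Ave, and S-FVI/UA from dichotomized ratings. The same routine yields I-CVI and S-CVI when applied to the relevance ratings described next.

```python
# Illustrative sketch of the FVI computations described above.
# 'ratings' is a hypothetical matrix: rows are raters, columns are items,
# values are the original 4-point clarity ratings (1-4).

def dichotomize(rating):
    """Recode 3 or 4 as 1 (clear/agree) and 1 or 2 as 0."""
    return 1 if rating >= 3 else 0

def face_validity_indices(ratings):
    """ratings: list of rater rows, each a list of 4-point item ratings."""
    n_raters = len(ratings)
    n_items = len(ratings[0])
    binary = [[dichotomize(r) for r in row] for row in ratings]

    # I-FVI: proportion of raters scoring each item as clear.
    i_fvi = [sum(row[j] for row in binary) / n_raters for j in range(n_items)]
    # S-FVI/Ave: mean of the item-level indices.
    s_fvi_ave = sum(i_fvi) / n_items
    # S-FVI/UA: proportion of items with universal (100%) agreement.
    s_fvi_ua = sum(1 for v in i_fvi if v == 1.0) / n_items
    return i_fvi, s_fvi_ave, s_fvi_ua

# Hypothetical example: 7 raters, 7 criteria, one dissenting rating.
ratings = [[4] * 7 for _ in range(7)]
ratings[0][6] = 2  # one rater finds the terminology criterion unclear
i_fvi, s_ave, s_ua = face_validity_indices(ratings)
print([round(v, 2) for v in i_fvi], round(s_ave, 2), round(s_ua, 2))
# -> [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.86] 0.98 0.86
```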
To establish content validity, the expert panel evaluated each questionnaire item for its relevance on a 4-point
Likert scale (Not relevant = 1, Less relevant = 2, Relevant = 3, Highly relevant = 4), which was dichotomized for
analysis: ratings of 3 or 4 were recoded as “1” and ratings 1 or 2 as “0.” The analysis applied both the CVI and
the CVR. The I-CVI was calculated as the number of experts rating an item relevant divided by the total number
of experts. The S-CVI/Ave was obtained by averaging the I-CVI values across all items, and the S-CVI/UA
represented the proportion of items with universal expert agreement. According to Polit and Beck (2006), an I-
CVI value of 0.83 or higher is acceptable when seven or more experts are involved. The CVR was calculated
using Lawshe’s (1975) formula, which reflects the extent of agreement among experts on whether each item is
essential.
\[
\mathrm{CVR} = \frac{n_e - N/2}{N/2}
\]
where
n_e = number of experts who rated the item 3 or 4 (i.e., essential), and
N = total number of experts.
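A minimal sketch of this formula (Python; the ratings shown are hypothetical) illustrates how unanimous and six-of-seven agreement translate into the CVR values reported later:

```python
# Illustrative sketch of Lawshe's (1975) CVR, matching the formula above.
# A rating of 3 or 4 counts the expert toward n_e (item rated essential).

def content_validity_ratio(item_ratings):
    """item_ratings: one item's 4-point relevance ratings, one per expert."""
    n = len(item_ratings)                         # N: total number of experts
    n_e = sum(1 for r in item_ratings if r >= 3)  # experts rating 3 or 4
    return (n_e - n / 2) / (n / 2)

print(round(content_validity_ratio([4, 4, 4, 4, 4, 4, 4]), 2))  # -> 1.0
print(round(content_validity_ratio([4, 4, 4, 4, 3, 3, 2]), 2))  # -> 0.71
```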
RESULTS
Objective 1 - Evaluating the face validity of the STPAS-I Instrument using FVI
The results of the face validity analysis are presented in Table 1. Seven experts assessed the clarity and
comprehensibility of the instrument against the seven face validity criteria. Six criteria achieved full agreement,
each with an I-FVI of 1.00. One criterion, “The terminology used is appropriate,” achieved consensus among
six of the seven experts, resulting in an I-FVI of 0.86. The S-FVI/UA was 0.86, while the S-FVI/Ave was 0.98,
both exceeding the acceptable threshold
of 0.83 (Polit & Beck, 2006). These results demonstrate the instrument’s strong clarity and comprehensibility.
Minor wording refinements were incorporated based on expert feedback, with no major revisions necessary.
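These scale-level values follow directly from the item-level results; as a worked check (with the single non-unanimous criterion at 6/7 ≈ 0.86):

\[
\text{S-FVI/Ave} = \frac{6(1.00) + \tfrac{6}{7}}{7} \approx 0.98, \qquad \text{S-FVI/UA} = \frac{6}{7} \approx 0.86
\]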
Table 1: Face Validity Analysis Findings
Objective 2 - Evaluating the content validity of the STPAS-I Instrument using Content Validity Ratio
(CVR)
Using Lawshe’s (1975) formula, seven experts rated the essentiality of all 69 items across the four constructs:
STEM-TPACK Teaching and Learning Practice (ST), Professional Development (P), Attitude (A), and
Self-Efficacy (S). Most items had CVR values between 0.71 and 1.00, with 48 items receiving perfect agreement
(CVR = 1.00) and 8 items at 0.71. Although Lawshe’s critical value is 0.99 for seven experts, the literature
supports retention of items with CVR ≥ 0.71 (Wilson et al., 2012; Ayre & Scally, 2014). No items fell below
this threshold,
affirming satisfactory content validity across constructs. Table 2 summarizes the CVR results and item decisions.
Table 2: CVR Values and Item Decisions for the Constructs STEM-TPACK Teaching and Learning Practice (ST), Professional Development (P), Attitude (A), and Self-Efficacy (S)

| CVR | Items | Total | Decision |
|-----|-------|-------|----------|
| 1.00 | ST1, ST2, ST3, ST4, ST6, ST8, ST9, ST10, ST11, ST12, ST13, ST14, ST15, ST17, ST20, ST21, ST24, ST25, ST26, P1, P2, P3, A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, S1, S2, S3, S4, S5, S6, S8, S9, S10, S11, S12, S13, S14, S15, S16 | 48 | Retained |
| 0.71 | ST5, ST7, ST16, ST18, ST19, P5, P9, S7 | 8 | Retained |
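As a worked check of the values in Table 2, with a panel of N = 7:

\[
\text{CVR}_{\text{unanimous}} = \frac{7 - 3.5}{3.5} = 1.00, \qquad \text{CVR}_{6\ \text{of}\ 7} = \frac{6 - 3.5}{3.5} \approx 0.71
\]

The 0.71 figure is the exact critical value Ayre and Scally (2014) tabulate for a seven-expert panel, which is why these items could be retained.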
Objective 3 - Evaluating the content validity of the STPAS-I Instrument using Content Validity Index
(CVI) analysis
The results of the CVI analysis, as summarized in Table 3, indicate that all I-CVI values ranged from 0.86 to
1.00, which exceeded the acceptable threshold of 0.83 recommended by Polit and Beck (2006) for seven or more
experts. This suggests a strong level of agreement among the expert panel regarding the relevance of all items.
Specifically, within the STEM-TPACK Teaching and Learning Practice construct, I-CVI values ranged from
0.86 to 1.00, with an S-CVI/Ave of 0.96. The Professional Development construct achieved I-CVI values
between 0.86 and 1.00, yielding an S-CVI/Ave of 0.98. All items under the Attitude construct demonstrated
perfect agreement among experts, with an I-CVI of 1.00 and an S-CVI/Ave of 1.00. Similarly, the Self-Efficacy
construct recorded I-CVI values from 0.86 to 1.00, resulting in an S-CVI/Ave of 0.99. Overall, the findings
confirm that all items demonstrated satisfactory content validity, exceeding the recommended cut-off values.
Therefore, no items were eliminated from the STPAS-I instrument based on CVI analysis. These results indicate
that the instrument possesses a high level of content representativeness and relevance across all constructs.
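As an arithmetic check against Table 3, the S-CVI/Ave for the STEM-TPACK Teaching and Learning Practice construct, whose 26 items comprise 19 items at 1.00 and 7 items at 0.86 (i.e., 6/7), is:

\[
\text{S-CVI/Ave} = \frac{19(1.00) + 7\left(\tfrac{6}{7}\right)}{26} = \frac{25}{26} \approx 0.96
\]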
Table 3: I-CVI and S-CVI/Ave Values for All Constructs (ST = STEM-TPACK Teaching and Learning Practice, P = Professional Development, A = Attitude, S = Self-Efficacy)

| Item | I-CVI | Item | I-CVI | Item | I-CVI | Item | I-CVI |
|------|-------|------|-------|------|-------|------|-------|
| ST1 | 1.00 | P1 | 1.00 | A1 | 1.00 | S1 | 1.00 |
| ST2 | 1.00 | P2 | 1.00 | A2 | 1.00 | S2 | 1.00 |
| ST3 | 1.00 | P3 | 1.00 | A3 | 1.00 | S3 | 1.00 |
| ST4 | 1.00 | P4 | 1.00 | A4 | 1.00 | S4 | 1.00 |
| ST5 | 0.86 | P5 | 0.86 | A5 | 1.00 | S5 | 1.00 |
| ST6 | 1.00 | P6 | 1.00 | A6 | 1.00 | S6 | 1.00 |
| ST7 | 0.86 | P7 | 1.00 | A7 | 1.00 | S7 | 0.86 |
| ST8 | 1.00 | P8 | 1.00 | A8 | 1.00 | S8 | 1.00 |
| ST9 | 1.00 | P9 | 0.86 | A9 | 1.00 | S9 | 1.00 |
| ST10 | 1.00 | P10 | 1.00 | A10 | 1.00 | S10 | 1.00 |
| ST11 | 1.00 | P11 | 1.00 | A11 | 1.00 | S11 | 1.00 |
| ST12 | 1.00 | P12 | 1.00 | | | S12 | 1.00 |
| ST13 | 1.00 | P13 | 1.00 | | | S13 | 1.00 |
| ST14 | 1.00 | P14 | 1.00 | | | S14 | 1.00 |
| ST15 | 1.00 | P15 | 1.00 | | | S15 | 1.00 |
| ST16 | 0.86 | P16 | 1.00 | | | S16 | 1.00 |
| ST17 | 1.00 | | | | | | |
| ST18 | 0.86 | | | | | | |
| ST19 | 0.86 | | | | | | |
| ST20 | 1.00 | | | | | | |
| ST21 | 1.00 | | | | | | |
| ST22 | 0.86 | | | | | | |
| ST23 | 0.86 | | | | | | |
| ST24 | 1.00 | | | | | | |
| ST25 | 1.00 | | | | | | |
| ST26 | 1.00 | | | | | | |
| S-CVI/Ave | 0.96 | S-CVI/Ave | 0.98 | S-CVI/Ave | 1.00 | S-CVI/Ave | 0.99 |
DISCUSSION
This study aimed to examine the validity evidence of the STPAS-I instrument, focusing on both face validity
and content validity as part of the preliminary validation phase. The findings demonstrated strong expert
agreement, indicating that the items were clear, relevant, and representative of their respective constructs. The
FVI results confirmed that the items were perceived as clear and comprehensible by the panel of experts. The I-
FVI values ranged from 0.86 to 1.00, while the S-FVI/Ave achieved 0.98, exceeding the acceptable threshold of
0.83 for seven or more experts (Polit & Beck, 2006). These findings suggest that the experts found the
questionnaire’s structure, language, and layout appropriate and user-friendly. The high level of clarity across
items indicates that respondents are likely to interpret the statements consistently, minimizing the risk of
ambiguity or misinterpretation during data collection. This result aligns with previous studies and recent research
highlighting the importance of face validity in ensuring item clarity and functional readability for respondents
(Ibrahim & Mohd Matore, 2024; Karami, Parra-Martinez, Ghahremani, & Gentry, 2024; Zamanzadeh et al.,
2015).
Similarly, the CVI and CVR analyses provided robust evidence of item relevance and essentiality
(Ahmad Fakhrin & Idris, 2025). The I-CVI values ranged from 0.86 to 1.00, surpassing the minimum acceptable
standard of 0.83 (Polit & Beck, 2006). Scale-level indices also demonstrated strong consistency, with S-CVI/Ave
values of 0.96 for STEM-TPACK Teaching and Learning Practice, 0.98 for Professional Development, 1.00 for
Attitude, and 0.99 for Self-Efficacy. These values exceed the commonly accepted benchmark of 0.90 (Lynn,
1986), indicating a high degree of expert agreement on item relevance (Polit & Beck, 2006).
The CVR analysis supported these results, showing that most items achieved the maximum value of 1.00,
reflecting unanimous expert agreement on their essentiality. A few items had slightly lower CVR values (0.71),
yet met the exact critical value derived by Ayre and Scally (2014) for a seven-expert panel, justifying their
retention. This outcome is consistent with prior research in STEM-TPACK validation studies (Amatan, Han,
& Pang, 2021; Suryadi, 2022), where similar high CVI and CVR scores indicated strong construct alignment and
theoretical soundness.
CONCLUSION
This study assessed the face and content validity of the STPAS-I instrument, developed to measure STEM-
TPACK Teaching and Learning Practice, Professional Development, Attitude, and Self-Efficacy among
polytechnic lecturers. Findings revealed that the instrument possesses strong validity evidence. The FVI
indicated excellent clarity and comprehensibility, with I-FVI values ranging from 0.86 to 1.00 and an overall S-
FVI/Ave of 0.98, confirming that all items were clear and appropriate for the target respondents. Similarly, the
CVI and CVR values were high, with S-CVI/Ave ranging from 0.96 to 1.00, reflecting strong expert consensus
on item relevance and representativeness. These results demonstrate that the STPAS-I items effectively capture
the intended constructs within the STEM-TPACK framework. Overall, the STPAS-I shows excellent face and
content validity, providing a sound basis for evaluating lecturers’ STEM-related competencies.
Further research is recommended to perform pilot testing and psychometric validation, including exploratory
and confirmatory factor analyses to establish the instrument’s construct validity and reliability for broader
application.
RECOMMENDATION
Based on the findings, it is recommended that the STPAS-I instrument be adopted for assessing polytechnic
lecturers’ competencies in STEM-TPACK teaching and learning practices, professional development, attitude,
and self-efficacy. Future studies should involve a larger and more diverse sample to further validate the construct
through exploratory and confirmatory factor analyses. In addition, reliability testing using Cronbach’s alpha or
composite reliability is suggested to strengthen the instrument’s psychometric properties. Continuous refinement
and adaptation of the instrument are also encouraged to ensure its applicability across different educational
contexts and disciplines.
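For the recommended reliability testing, a minimal sketch of Cronbach’s alpha follows (Python; the helper function and the small response matrix are illustrative assumptions, not data from this study):

```python
# Minimal sketch of Cronbach's alpha for the recommended reliability testing.
# 'responses' is a hypothetical matrix: rows = respondents, columns = items.

def cronbach_alpha(responses):
    n_items = len(responses[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_vars = [variance([row[j] for row in responses]) for j in range(n_items)]
    total_var = variance([sum(row) for row in responses])  # variance of totals
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point Likert responses: four respondents, three items.
responses = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]]
print(round(cronbach_alpha(responses), 2))
```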
ACKNOWLEDGMENT
The authors would like to express sincere appreciation to all expert panel members for their valuable contributions in
evaluating the content and face validity of the STPAS-I instrument. Special thanks are also extended to the
polytechnic lecturers who participated in the study and to the institution for the continuous support throughout
this research.
REFERENCES
1. Abdullah, A. & Mahmud, S. N. D. (2024). Applying TPACK in STEM Education towards 21st Century:
Systematic Literature Review. International Journal of Academic Research in Progressive Education and
Development, 13(1). https://doi.org/10.6007/IJARPED/v13-i1/20667
2. Abdullah, N., Lim, J. M., & Sulaiman, N. (2024). Validating TPACK instruments for Malaysian STEM
tertiary educators: A contextual approach. International Journal of Educational Measurement, 45(2),
156-172. https://edulearn.intelektual.org/index.php/EduLearn/article/view/21816/10690
3. Ahmad Fakhrin, K. A., & Idris, I. (2025). A CEFR-informed English test: How Content Validity Index
and Content Validity Ratio are used for content validation. EDUCATUM Journal of Social Sciences,
11(1). https://doi.org/10.37134/ejoss.vol11.1.7.2025
4. Azman, H. H., & Ibrahim, M. (2023). STEM education policies in Malaysia: Preparing polytechnic
graduates for IR 4.0. Asian Journal of Higher Education, 14(2), 120-135.
5. Akbar, Z., Khan, R. A., Khan, H. F., et al. (2023). Development and validation of an instrument to
measure the micro-learning environment of students (MLEM). BMC Medical Education, 23, Article 395.
https://doi.org/10.1186/s12909-023-04381-3
6. Amatan, M. A., Han, C. G., & Pang, V. (2021). Kesahan kandungan soal selidik faktor konteks, input dan
proses terhadap penerimaan pelaksanaan elemen pendidikan STEM dalam pengajaran dan pembelajaran
guru-guru sekolah menengah menggunakan nisbah kesahan kandungan (CVR). International Journal of
Advanced Research in Future Ready Learning and Education, 23(1), 10-22.
https://doi.org/10.37934/frle.23.1.1022
7. Bandura, A. (1997). Self-efficacy: The exercise of control. Freeman.
8. Capraro, R. M., Capraro, M. M., & Morgan, J. R. (2021). Designing professional development for
integrated STEM teaching. Journal of STEM Teacher Education, 56(1), 45-61.
9. Chai, C. S., Jong, M. S.-Y., & Yan, Z. M. (2020). Surveying Chinese teachers’ technological pedagogical
STEM knowledge: A pilot validation of STEM-TPACK survey. International Journal of Mobile Learning
and Organisation, 14(2), 203-214. https://doi.org/10.1504/IJMLO.2020.106181
10. Chai, C. S., Rahmawati, Y., & Jong, M. S.-Y. (2020). Indonesian science, mathematics, and engineering
preservice teachers’ experiences in STEM-TPACK design-based learning. Sustainability, 12(21),
Article 9050. https://doi.org/10.3390/su12219050
11. Cribbs, J., Duffin, L., & Day, M. (2022). Effectiveness of an inquiry‐focused professional development:
Secondary mathematics and science teachers’ beliefs and instruction. Journal of Research in STEM
Education, 8(2), 35-60. https://doi.org/10.51355/jstem.2022.110
12. Ibrahim, S. N. A., & Mohd Matore, M. E. E. (2024). Face validity assessment of Malaysian teachers’
global competency instrument using face validity index analysis from potential test-takers’ perspective.
International Journal of Learning, Teaching and Educational Research, 24(10).
https://doi.org/10.26803/ijlter.24.10.23
13. Idris, R., & Bacotang, J. (2023). Exploring STEM education trends in Malaysia: Building a talent pool
for Industrial Revolution 4.0 and Society 5.0. International Journal of Academic Research in Progressive
Education and Development, 12(2). Retrieved from
https://ijarped.com/index.php/journal/article/view/1184
14. Karami, S., Parra-Martinez, A., Ghahremani, M., & Gentry, M. (2024). Development and Validation of
Perception of Wisdom Exploratory Rating Scale: An Instrument to Examine Teachers’ Perceptions of
Wisdom. Education Sciences, 14(5), 542. https://doi.org/10.3390/educsci14050542
15. Keetharuth, A., Brazier, J., Connell, J., Kharicha, K., & Forbes, A. (2018). The importance of content
and face validity in instrument development: lessons learnt from service users when developing the
Recovering Quality of Life measure (ReQoL). Quality of Life Research, 27(7), 1893-1902.
https://doi.org/10.1007/s11136-018-1847-y
16. Li, M., & Nugraha, M. G. (2025). Development and validation of the secondary mathematics teachers’
TPACK scale: A study in the Chinese context. International Electronic Journal of Mathematics
Education, 20(2), em0819. https://doi.org/10.29333/iejme/15934
17. Li, M., & Noori, A. Q. (2023). Development and validation of the secondary mathematics teachers’
TPACK scale: A study in the Chinese context. Eurasia Journal of Mathematics, Science and Technology
Education, 19(11), Article em2350. https://doi.org/10.29333/ejmste/13671
18. Mansour, N., Said, Z., & Abu-Tineh, A. (2024). Factors impacting science and mathematics teachers’
competencies and self-efficacy in TPACK for PBL and STEM. Eurasia Journal of Mathematics, Science
and Technology Education, 20(5), em2442. https://doi.org/10.29333/ejmste/14467
19. Mohamad Hasim, S., Rosli, R., Halim, L., Capraro, M. M., & Capraro, R. M. (2022). STEM Professional
Development Activities and Their Impact on Teacher Knowledge and Instructional Practices.
Mathematics, 10(7), 1109. https://doi.org/10.3390/math10071109
20. Ministry of Higher Education Malaysia. (2024, February 22). Higher education ministry to look into
establishing nation’s first AI polytechnic, says minister. Malay Mail. Retrieved from
https://www.malaymail.com/news/malaysia/2024/02/22/higher-education-ministry-to-look-into-
establishing-nations-first-ai-polytechnic-says-minister/119446
21. Napitupulu, M. H., Muddin, A., Bagiya, B., Diana, S., & Rosyidah, N. S. (2025). Teacher professional
development in the digital age: Strategies for integrating technology and pedagogy. International Journal
for Science Review, 2(4). https://doi.org/10.71364/ijfsr.v2i4.33
22. Omar, I. O. M. (2021). Integrating computer-related technology into instructional practice at a higher
learning institution in Malaysia. IIUM Journal of Educational Studies, 5(1).
https://doi.org/10.31436/ijes.v5i1.152
23. Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what's being
reported? Critique and recommendations. Research in Nursing & Health, 29(5), 489-497.
https://doi.org/10.1002/nur.20147
24. Setiawan, R., Wagiran, W., & Alsamiri, Y. (2024). Construction of an instrument for evaluating the
teaching process in higher education: Content and construct validity. REID (Research and Evaluation
in Education), 10(1), 50-63.
https://doi.org/10.21831/reid.v10i1.63483
25. Shafie, H., Abd Majid, F., & Ismail, I. S. (2024). Developing a 21st Century Technological Pedagogical
Content Knowledge (TPACK) Instrument: Content Validity and Reliability. International Journal of
Education, 14(3). https://doi.org/10.5296/ije.v14i3.19980
26. Sonsupap, K., Cojorn, K., & Sitti, S. (2024). The Effects of Teachers’ Technological Pedagogical Content
Knowledge (TPACK) on Students’ Scientific Competency. Journal of Education and Learning, 13(5),
91-104. https://doi.org/10.5539/jel.v13n5p91
27. Suryadi, D. (2022). Content validity for the research instrument regarding teaching methods of the basic
principles of bioethics. Jurnal Pendidikan Kedokteran Indonesia, 11(2), 96-105.
https://doi.org/10.22146/jpki.77062
28. Syariff M. F., Fuad, D. R., Musa, K., & Yusof, H. (2022). The development and validation of the principal
innovation leadership scale in Malaysian secondary schools. International Journal of Evaluation &
Research in Education (IJERE), 11(1), 193-200. https://doi.org/10.11591/ijere.v11i1.22038
29. Tan, Z. H. H., Purwaningsih, E., Taqwa, M. R. A., Putri, F. D., & Kurniawan, F. (2024). Development
of electronic independent learning activity unit (E-ILAU) using project-based learning-STEM integrated
TPACK to improve higher-order thinking skills. Jurnal Pendidikan Fisika Indonesia, 20(1), 105-123.
30. Thibaut, L., Ceuppens, S., De Loof, H., De Meester, J., Goovaerts, L., Struyf, A., Boeve-de Pauw, J.,
Dehaene, W., Deprez, J., De Cock, M., Hellinckx, L., Knipprath, H., Langie, G., Struyven, K., Van de
Velde, D., Van Petegem, P., & Depaepe, F. (2018). Integrated STEM education: A systematic review of
instructional practices in secondary education. European Journal of STEM Education, 3(1),
2. https://doi.org/10.20897/ejsteme/85525
31. Usman, O., Auliya, V., Susita, D., & Marsofiyati. (2022). TPACK (Technological Pedagogical Content
Knowledge) Influence on Teacher Self-Efficacy, and Perceived Usefulness, Ease of Use and Intention to
Use E-Learning Technology. Journal of Southeast Asian Research, 2022, Article ID 895111.
https://doi.org/10.5171/2022.895111
32. Wilson, C. D., Riggs, L., & Bohn, J. (2023). The effect of professional development on in-service STEM
teachers’ self-efficacy: A meta-analysis of experimental studies. International Journal of STEM
Education, 10, 37. https://doi.org/10.1186/s40594-023-00422-x
33. Yunus, H. M., & Joblie, F. S. M. H. (2022). Technology integration analysis among TVET lecturers in
Sarawak. Journal of Technology and Humanities, 3(1). https://doi.org/10.53797/jthkkss.v3i1.2.2022
34. Zaid, N. S., & Kamin, Y. (2023). Competency of TVET lecturers in digital and automation at public
higher education institutions in IR 4.0. Malaysian Journal of Social Sciences and Humanities, 9(11).
https://doi.org/10.47405/mjssh.v9i11.3096
35. Zamanzadeh, V., Ghahramanian, A., Rassouli, M., Abbaszadeh, A., Alavi-Majd, H., & Nikanfar, A. R.
(2015). Design and implementation content validity study: Development of an instrument for measuring
patient-centered communication. Journal of Caring Sciences, 4(2), 165-178.
36. Zulnaidi, H., Abdul Rahim, S. S., & Mohd Salleh, U. K. (2020). The readiness of TVET lecturers in
facing the intelligence age IR4.0. Journal of Technical Education and Training, 12(3), 89-96.
https://publisher.uthm.edu.my/ojs/index.php/JTET/article/view/3969