Problem Statement
Assessment tools for academic discussions often give limited attention to specific language functions such as
clarifying, agreeing, disagreeing, and summarizing. This oversight results in incomplete evaluations of
students’ communicative abilities, as crucial aspects of interaction are neglected. Additionally, current
assessment methods show significant inconsistency when evaluating students' speech communication and
reasoning during academic conversations, leading to unreliable and potentially inaccurate outcomes (Furtak &
Ruiz-Primo, 2008; Bheda, 2022).
Furthermore, existing evaluation forms lack sufficiently detailed descriptors, making it difficult for
instructors to track and assess these language functions while simultaneously listening to ongoing
discussions. This gap creates challenges in real-time assessment, hindering educators’ ability to provide
accurate, immediate feedback. Addressing these issues requires the development of more focused, consistent,
and user-friendly assessment frameworks that facilitate the practical evaluation of language use in dynamic
academic settings (Furtak & Ruiz-Primo, 2008; Bheda, 2022).
Objectives
Below are the objectives of the innovation:
1. To develop an innovative evaluation tool that integrates language function assessment with
argumentation rubrics.
2. To address inconsistencies in the assessment of students’ language and argumentation skills in academic
discussions.
3. To create a user-friendly evaluation form for use during assessment.
Product Description
The Dialogue Dashboard was created during the Mock Meeting assessment for the English for Occupational
Purposes course. Designed to assess students' speaking skills in a structured, real-world setting, the tool
addresses problems with several earlier rubrics that made it challenging and time-consuming to track language
functions during discussions (Politis et al., 2023; Tauchid, Putri, & Suwandi, 2024). The Dialogue Dashboard
provides a straightforward and effective way to monitor and evaluate language function use in simulated
meeting settings, which helps to expedite this process (Gao, Roever, & Lau, 2024). The invention is based on
the two-stage Mock Meeting Assessment, which consists of a group discussion and an individual presentation.
The latter stage is where it is most important to evaluate students' participation and use of language
functions. Because the previous rubric was general and lacked specificity in certain aspects (Politis et al.,
2023), the Dialogue Dashboard offers specialised evaluation forms with two distinct but complementary tables:
Table 1 (the Dashboard Dialogue) for systematic tracking of language function use and Table 2 (the Comment
Board) for gathering qualitative feedback on student performance. The instructor uses these tables
simultaneously during assessments for different but related purposes, which improves the efficacy, fairness,
and clarity of the evaluation process (Zhai & Wibowo, 2023).
METHODOLOGY
The Dialogue Dashboard was developed and implemented in the English for Occupational Purposes course to
improve the evaluation of students' speaking abilities during the Mock Meeting assessment, a structured,
performance-based assessment that mimics real-world professional communication. The
Dashboard Dialogue and the Comment Board are two separate but complementary assessment tools that were
developed as part of the methodology. A systematic table that recorded the language functions students used
during group discussions, such as clarifying, agreeing, or summarizing, made up the Dashboard Dialogue,