Creative Music Technology in Muyahayo of Rodat Ensemble

Shafiq Azhar Shahrizal*, Mohd. Azam Sulong

Universiti Pendidikan Sultan Idris

*Corresponding author

DOI: https://dx.doi.org/10.47772/IJRISS.2025.906000358

Received: 16 June 2025; Accepted: 25 June 2025; Published: 17 July 2025

ABSTRACT

This research explores the integration of the Rodat Ensemble, a traditional Terengganu Malaysian art music, with modern music production methods to revive interest, particularly among younger generations, and to preserve this cultural heritage. Motivated by the declining popularity of Rodat music, the research aims to innovate and modernize the Rodat Ensemble using new audio processing software and production methods while upholding its authenticity. The study employs a practice-based research approach, integrating research and creative work through musical output. Using advanced Artificial Intelligence enabled tools such as Musicfy and Lalal.ai, this research examines and processes Rodat performances. The research finds that the vocals of Muyahayo, one of the Rodat Ensemble songs, possess microtonal qualities and are more laid-back in timing than pop vocals. The research also produced a song to illustrate how these qualities can be utilized with state-of-the-art technology.

Keywords: Rodat Ensemble, Muyahayo, Artificial Intelligence, microtonal, technology

INTRODUCTION

Perspective

Subjectivity in music is a curious thing and does much to illustrate the rich and extremely personal quality of taste. One may discover that what inspires one person does not inspire another, and it is in this unpredictability that we find the richness and diversity of the world of music. Taste in music is governed by any number of factors: cultural background, personal history, and emotional associations.

In this research, I will mainly be using pragmatism as a research paradigm and a mixed-methods approach. Subjectivism and objectivism research paradigms would also find their application in parts of the research.

Inspiration and Motivation

My experience in primary school revolved around the discovery of new sound elements and musical styles, an engagement that proved to be invaluable to my personal growth. Memorizing the words to my favorite songs went beyond a form of entertainment; it allowed for a more intense level of engagement with the music, nurturing a heightened sensitivity to its nuances and the sentiments conveyed through the composition itself. This theme resurfaced when I attended secondary school and a whole new world of sound converged through exposure to a wide variety of musical genres, ranging from rock and pop music to hip-hop, classical, and jazz, each with its distinct rhythms, stories, and cultural heritage.  However, there was one sound that really stood out, which gave a true feeling of magic and enchantment. It was none other than the traditional music of Malaysia. The richness conveyed through the vocal tone, coupled with the percussive instrumental accompaniment infused with melodic content, touched me deeply upon initial exposure during an international cultural festival. This served as the catalyst for the research study being undertaken. I then started to explore ways in which such traditional art forms could be preserved, transformed, and enhanced through modern music technology.

During a recent field visit to Terengganu, undertaken as preparatory work for an interview with the Jabatan Kebudayaan & Kesenian Negara Terengganu (JKKN), the director expressed his concern about the threatened status of Rodat music, which is fast approaching the point of extinction. He suggested merging Rodat with modern music technology as a way to appeal to new-generation audiences and, ultimately, to preserve this vital cultural heritage. This experience shed light upon a potential direction for my thesis. The core concern of my research resides in balancing the raw emotional content of the Rodat Ensemble’s vocal work and its percussive accompaniment against the precision and flexibility provided by modern music production methods. The Rodat Ensemble’s vocals are unique in that most of their melodic embellishments and ornamentations are free-flowing and largely micro-tonal. With the rapid development of music technology, I have been curious about which methods can make use of this sound. This interest is not just about the richness behind the music of the Rodat Ensemble but also about how it can revolutionize the way we make music today.

Muyahayo of Rodat Ensemble

The Rodat Ensemble is a traditional art form from the state of Terengganu that was once very popular among the local community. Renowned for its dynamic and vibrant presentations, the ensemble consists of three musical elements: vocals, dance, and instrumental accompaniment on the tar. The main theme of Rodat’s lyrical message includes Islamic prayers praising Allah and Prophet Muhammad S.A.W (Nasuruddin, 1991). The Rodat Ensemble is usually performed at social gatherings such as weddings, new year celebrations, and other community festivals. With a commitment to preserving and promoting Malaysia’s rich musical traditions, the Rodat Ensemble stands as a symbol of artistic excellence and cultural pride in the region. Although prominent during the 20th century, the Rodat Ensemble is currently facing a decline in popularity, especially among the younger generation. A typical Rodat Ensemble performance set consists of popular songs such as Muyahayo and Sollu Muhammad.

For this paper, Muyahayo will be featured. It is the most iconic song in the Rodat Ensemble repertoire. There is a version of it on YouTube (Awang, 2021) that was recorded in a studio. The vocal melody and lyrics combined with the tar are very raw and tribal-esque. These characteristics make it well suited as a sample for my research.

Music Technology Imagination Ideas

The expanse of music technology serves as a canvas for the imagination. Envisioning the integration of traditional forms such as the Rodat Ensemble with state-of-the-art music technology tools unveils a plethora of sonic possibilities. The idea is to incorporate the unique characteristics of the Rodat Ensemble into contemporary music production workflows in a way that still respects its foundations.

Immersed in this creative process, I anticipate using state-of-the-art audio processing techniques and the latest developments in artificial intelligence (AI) in music production to capture and alter the nuances of Rodat Ensemble recordings while crafting a composite of sounds that surpasses the confines of standard production methods. This section is not only an exploration of these technicalities but also a homage to the creative fusion of historical art with technological advancement.

Creative Work Objectives

The objective of this research is to develop an integrated approach that allows the unification of Rodat Ensemble with contemporary music technology production. This research aims to:

  • Identify the technological problems that exist in digitizing the Rodat Ensemble.
  • Creatively utilize innovative techniques and tools to address the technological problems of digitizing the Rodat Ensemble.

LITERATURE REVIEW

Technology in Music

Technology has revolutionized the way music is produced and listened to over the years. From the advent of recording and playback devices to the current era of digital audio workstations and advanced software synthesizers, music and technology have become increasingly intertwined. Advances in digital instruments, sampling techniques and computer music composition have widened the sonic possibilities available to the artist. Moreover, technology has revolutionized the way music is distributed and consumed, with streaming platforms enabling global access to a vast array of musical genres. While technology provides unprecedented creative potential in music production and performance, concerns have also emerged about the authenticity of digitally mediated performance and the homogenization of musical styles across the world. The meeting point between music and technology is a rapidly evolving field that continues to shape the way artists create, innovate and communicate with a global audience.

Digital Audio Workstation

There are several Digital Audio Workstations (DAWs) used in music today. Each carries slight benefits and drawbacks, yet as a whole they serve the same purpose: music production and mixing. In this research, I focus primarily on FL Studio, created by Image-Line. FL Studio helped spearhead innovation and accessibility in the industry through its user-friendly interface and extensive feature list (Senior, 2018). Its built-in plugins and support for third-party plugins give users further capability in sound design and audio processing tasks. Researchers have also studied the role played by the DAW in high-quality sound production (Zagorski-Thomas, 2014).

Audio Processing & Effects

One of the highlights of my research is integrating musical Artificial Intelligence (AI), amongst other audio effects, with the elements of the Rodat Ensemble. AI technology has advanced to the point that it can now create, compose, and improve musical works that were previously produced only by humans. AI can help with several areas of music creation: it can write parts of a song, mix and master it, create voice clones, and much more (Smith-Muller, 2023).

Several AI audio processors that I will be using in this research, namely Musicfy (Musicfy Inc, 2023), LALAL.AI (Omnisale GmbH, 2024) and Synplant 2 (NuEdge Development, 2024), are described below.

A breakthrough in audio processing has brought us an AI tool called Musicfy. It essentially transforms any recorded sample into a sound of the user’s choosing. For example, after building a sound model of a saxophone in the program, a vocal melody can be recorded and rendered as a realistic performance of the pre-built saxophone model. Nuances such as pitch, intonation and timing are faithfully captured.

Additionally, LALAL.AI can be used to separate the original Rodat Ensemble performance recordings available on YouTube into high-quality vocal and instrumental stems. This removes the need to record the performance from scratch. The tool is particularly useful for musicians, producers, and audio enthusiasts, offering a convenient way to extract vocals for remixing, karaoke creation, or instrumental production. LALAL.AI’s technology represents a significant advancement in the field of audio processing, providing a user-friendly solution for manipulating and exploring the individual elements within mixed music tracks.
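
As a point of reference, the same idea can be sketched with freely available tools. The snippet below uses librosa’s harmonic-percussive separation as a rough open-source analogue of stem separation; it is not LALAL.AI’s own (proprietary) method, and the file name is a hypothetical local copy of the recording.

```python
# A minimal sketch of splitting a mixed recording into a harmonic layer
# (vocal/melodic content) and a percussive layer (transient tar hits) with
# librosa's HPSS. Open-source analogy only, not LALAL.AI's separation engine;
# "muyahayo_mix.wav" is a hypothetical file name.
import librosa
import soundfile as sf

y, sr = librosa.load("muyahayo_mix.wav", sr=None, mono=True)

# Harmonic component holds sustained vocal/melodic material,
# percussive component holds the drum transients.
y_harmonic, y_percussive = librosa.effects.hpss(y)

sf.write("muyahayo_harmonic.wav", y_harmonic, sr)
sf.write("muyahayo_percussive.wav", y_percussive, sr)
```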

Synplant is a unique software synthesizer developed by Sonic Charge. It breaks the mold in both sound generation and user interaction compared to conventional synthesizers. Built upon a plant-growing metaphor, the user does not work directly with typical synthesizer parameters (i.e., filters and oscillators) but instead plants seeds and nurtures virtual plants to produce sound. The sound in Synplant is shaped by genetic algorithms that mutate and change based upon user input. This organic approach facilitates unexpected and dynamic sound textures, making Synplant a potent tool for sound design and experimentation.
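
As a rough illustration of the seed-and-mutate idea (and not Sonic Charge’s actual algorithm), the toy sketch below treats a patch as a set of normalized parameters and lets the user audition randomly mutated offspring of a seed patch.

```python
# A toy sketch of "seed and mutate" sound design in the spirit of Synplant
# (illustrative only; the real plugin's genetics are proprietary).
import random

def mutate(patch, amount=0.1):
    """Return a copy of the patch with each parameter nudged randomly within [0, 1]."""
    return {name: min(1.0, max(0.0, value + random.uniform(-amount, amount)))
            for name, value in patch.items()}

# Hypothetical normalized synth parameters acting as the "seed".
seed_patch = {"osc_pitch": 0.5, "filter_cutoff": 0.7, "fm_depth": 0.2, "decay": 0.4}

generation = [mutate(seed_patch) for _ in range(8)]  # eight "branches" from one seed
chosen = generation[0]                               # in practice, chosen by ear
print(chosen)
```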

Mixing / Mastering

The mastering process is the final stage, in which the finished mix of a single or album is prepared and refined for release to the general public. Mastering is done to make the music sound professional and consistent across different playback media.

One such mastering tool is Ozone 11 (Izotope, 2024). Ozone 11 is an audio mastering plug-in suite developed by iZotope, a company synonymous with powerful tools and high-end functionality aimed at helping producers and engineers create professional-sounding masters. Master Assistant in Ozone 11 analyzes the audio material and suggests initial settings based on the targeted loudness (e.g., streaming, CD, reference) and the desired sound signature (e.g., warm, balanced, vintage). Although this software indicates a general direction at the mastering stage, it is still best for the engineer to manually tweak parameters such as dynamics and imaging. When using this plugin, I need to adapt the suggested template to fit the sound I desire.
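
To make one part of the mastering stage concrete, the sketch below measures integrated loudness and normalizes a mix toward a streaming-style target of about -14 LUFS using the open-source pyloudnorm library. This is only an illustration of a single mastering step, not Ozone 11’s Master Assistant, and the file names are hypothetical.

```python
# A minimal loudness-normalization sketch (one small piece of mastering).
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")        # hypothetical final mix
meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)   # measured LUFS of the mix

# Normalize toward an assumed streaming-style target of -14 LUFS.
mastered = pyln.normalize.loudness(data, loudness, -14.0)
sf.write("final_master.wav", mastered, rate)
```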

Rodat Ensemble

The Rodat Ensemble is a traditional art form from the state of Terengganu that was very popular among locals in the past. The ensemble is characterized by lively and energetic performances that comprise three basic musical elements: vocals, choreography, and instrumental accompaniment by the tar. The main theme of Rodat’s lyrical message includes Islamic prayers praising Allah and Prophet Muhammad S.A.W (Nasuruddin, 1991). The Rodat Ensemble is usually performed at social gatherings such as weddings, new year celebrations, and other community festivals. With a commitment to preserving and promoting Malaysia’s rich musical traditions, the Rodat Ensemble stands as a symbol of artistic excellence and cultural pride in the region. Although prominent during the 20th century, the Rodat Ensemble is currently facing a decline in popularity, especially among the younger generation. The most iconic song for the Rodat Ensemble is Muyahayo. The version on YouTube (Awang, 2021) was recorded in a studio. The vocal melody and lyrics combined with the tar are very raw and tribal-esque. Samples from this song will be used in my own creative works.

Theoretical Concepts of Artificial Intelligence

Theoretical concepts of Artificial Intelligence (AI) in music examine the potential applications of AI technologies and interpret them from a scholarly perspective. They help account for where AI converges with music creation, perception, and interaction, and outline how music is affected by AI as both a business and an art form. Several significant theoretical concepts are outlined below.

Algorithmic Composition:

Algorithmic composition refers to the application of algorithms, that is, defined rules or processes, to produce music autonomously. It can also be viewed as a theoretical exploration of the ways in which artificial intelligence can copy or extend human creativity. Algorithmic composition challenges classical notions of creativity and authorship, asking whether music created by computer algorithms can be considered “original” or “artistic” (Fernández & Vico, 2013).
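
A minimal example makes the idea concrete: the sketch below generates a short melody from a first-order Markov chain over scale degrees. The transition table is invented purely for illustration and has no relation to Rodat practice.

```python
# A minimal illustration of algorithmic composition: a first-order Markov
# chain over scale degrees produces a melody autonomously.
import random

transitions = {
    0: [0, 1, 2, 4],   # from the tonic, prefer nearby degrees (illustrative choices)
    1: [0, 2, 3],
    2: [1, 3, 4],
    3: [2, 4, 0],
    4: [3, 0, 2],
}

def generate_melody(length=16, start=0):
    melody = [start]
    for _ in range(length - 1):
        melody.append(random.choice(transitions[melody[-1]]))
    return melody

print(generate_melody())   # e.g. [0, 2, 3, 4, 0, 1, ...]
```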

Machine Learning and Neural Networks:

Machine learning in music involves training models such as neural networks on large quantities of existing music so that they learn styles, patterns, and forms. The models then use what they have learned to generate new music or to assist music production. The concept raises questions about the use of data-driven processes in music, namely the representation, acquisition, and production of musical knowledge with the assistance of machines. The application of neural networks further complicates questions about the computer’s ability to mimic cognitive and creative processes (Fiebrink & Caramiaux, 2016).

Creativity and Co-Creation:

AI creativity is concerned with the way the use of AI can produce music either autonomously or assist human creators in the production of new work. The topic touches upon the boundaries of philosophical debates surrounding creativity itself. Is it possible that a machine can be creative? How does music produced with the use of AI reconfigure the role of the composer and the performer? How does music produced with the assistance of AI redefine artistic agency and co-authorship? (Newman, Morris & Lee, 2023).

Style Imitation and Transfer:

Style imitation is where models replicate music in the style of particular composers, genres, or eras. Style transfer is where a model draws musical elements from one style and applies them to another to generate hybrids. This also calls into question the authenticity of style within music. Can a model understand musical context and intent as well as humans can? How does a model’s copying of styles challenge our concepts of originality and contribution within the historical context of music? (Bougueng, Ens & Pasquier, 2025).

Generative Models:

Generative models such as Generative Adversarial Networks and Variational Autoencoders are used to create new music by learning from existing datasets and generating new content based on the learned patterns. The concept is also at the centre of debate about the limits of AI creativity and whether it can bring about something truly new or merely reorganizes existing patterns. The use of generative models in music also draws attention to the potential for future innovation in music theory (Hernandez-Olivan & Beltran, 2021).

Human-AI Interaction in Music:

This concept involves the study of human-AI interaction in music performance, music composition, and music production. The AI can serve as a controller or co-contributor to music production processes. Human-AI interaction also concerns controllership, self-determination, and agency. How far can creativity in the artistic process belong to the role played by the AI versus human contribution? The concept further discusses the application of AI in music performance and improvisation where the AI responds dynamically to human input in real-time (Nair, 2025).

Cognitive Modeling of Music Perception:

AI can also be used to model human perception and processing of music using computational models to replicate aspects of musical cognition such as the recognition of melody, rhythmic processing, and emotional response to music. This concept bridges the field of AI with music psychology and cognitive science to examine the way in which computer models replicate or differ from human musical experience. This then raises the issue as to the nature of musical understanding and emotional engagement if viewed through the filter of algorithms.

Ethics and Intellectual Property:

Authorship, intellectual property rights, and originality are challenged with music generated through AI. AI music disrupts the demarcating line between human and technological contributions to artistry. Music applications using AI require a redefinition of the law of possession and copyrighting. Should AI creations receive copyright protection? And if they are to receive protection, does the programmer, user, or computer own the rights? There are also ethical issues concerning the displacement of human musicians in music production in the business world (Deltorn & Macrez, 2018).

Emotional and Aesthetic Response:

This concept explores the emotional and aesthetic effect of AI-generated music on people. Can AI create music that emotionally affects people in the same way that human-made music does? This brings up questions about the application of emotional intelligence in music composition and whether or not AI can understand or replicate the intricacy of human emotion. The concept also pertains to aesthetics, challenging traditional ideas about beauty, taste, and meaning in music.

Cultural and Social Implications:

There are also social and cultural implications of AI music, such as the way it transforms music production, dissemination, and listening. The democratization of music production through AI raises concerns about accessibility and the professional standing of musicians. The use of AI to shape popular music trends also has wide-reaching effects, raising the issue of homogenization and eroded musical diversity.

Frameworks on How Artificial Intelligence is Revolutionizing the Music Industry

In order to develop my own framework for why I have used AI in this research, a few references on the matter are considered here. A blog post by FasterCapital discusses how AI is revolutionizing the music industry (FasterCapital, 2024). The resulting frameworks are shown in Figure 1 and Figure 2 below.

Figure 1: How AI is Revolutionizing the Music and Sound Effects Industry

In summary, Figure 1 demonstrates that artificial intelligence transforms music and sound creation by efficiently analyzing large libraries, thus enabling the quick choosing of elements. The technology stimulates creativity by producing unique compositions and effects, offers cost-effective solutions, saves time, democratizes music production, and facilitates real-time collaboration between artists.

Figure 2 demonstrates the diversity of artificial intelligence applications in music across various genres. In classical music, AI algorithms replicate the styles of Mozart and Beethoven to produce symphonic music that draws positive reviews. In pop music, AI analyzes current trends to produce catchy melodies and lyrics, eventually yielding chart-topping tracks. In experimental music, AI pushes the boundaries of creativity with music that defies the norms of rhythm and harmony, engaging avant-garde listeners with new sonic explorations.

Figure 2: From Classical to Experimental Genres

After careful analysis of the frameworks above, I have produced my own framework as in Figure 3.

 

Figure 3: My Research Framework

My framework illustrates how AI transforms Rodat Ensemble into an innovative form called Modern Rodat. Starting from micro-tonal and temporal elements of traditional music, AI enhances the process through:

  • Machine-learning and Neural Networks
  • Creativity and co-creation
  • Style imitation and transfer
  • Generative models
  • Human-AI interaction
  • Cultural and social implications

AI acts as a powerful bridge, blending cultural heritage with modern tools. The result is a new form of Rodat that hopefully is:

  • Unique in sound
  • Effective in appeal
  • Faithful to original sound

In essence, AI empowers the evolution of traditional music into new, expressive, and culturally respectful forms.

Creative Artistic Process

Introduction

Pragmatism has provided the overall research framework in this study through a mixed-methods approach adopting a combination of practice-based, practice-led, and qualitative methods.

Pragmatism is a philosophical approach where the importance of practical consequences and the utility of actions, notions, and beliefs are considered the main standards in evaluating their value and accuracy. Unlike a focus on theoretical abstraction, pragmatism is more interested in the application of theories and notions to the solution of actual issues. This perspective emphasizes the imperative need for the practical application of research and stimulates a range of problem-solving methods (Dewey, 1916).

Vear (2022) suggested the idea that a practice-based methodology creates a framework where the production or realization of artistic, design-focused, or practical work is viewed as a legitimate research outcome. This methodology includes the research process and the corresponding creative or practical work outputs. The emphasis is placed upon the process of creating a work of art, a design model, a performance, or a practical intervention, geared towards creating new knowledge through experiential engagement.

According to McNamara (2012), the practice-led method is a conceptual framework through which the artistic practices, methodologies, and products of the researcher are incorporated into the research design and subsequent findings. Like the practice-based methodology, the practice-led method recognizes the importance of theoretical understanding to inform and guide creative or practical work. In practice-led research, theoretical frameworks guide practical or creative work, while the resulting products serve to further theoretical understanding in the discipline.

In my research activities, this approach is consistently applied in my artistic work through the production of new music compositions that demonstrate the incorporation of the Rodat Ensemble into different music production methods.

Samples

In this research, I have chosen Rodat Ensemble songs such as Muyahayo and Sollu Muhammad as my main samples. Other samples, such as Cindai by Dato’ Siti Nurhaliza and Thinking Out Loud by Ed Sheeran, were also used. Convenience and judgemental sampling were used in selecting these samples: Muyahayo and Sollu Muhammad were readily available online, while Cindai and Thinking Out Loud were deemed by myself to be the most suitable samples to showcase the difference between traditional and pop music.

Artistic Process

In the first phase, I started by comparing Muyahayo’s vocal sample with that of a pop vocalist, Ed Sheeran. For this, LALAL.AI (Figure 4) was used to separate the original Muyahayo recording from YouTube into vocal and instrumental stems, as there are no recorded vocal stems available on the internet, unlike most pop songs.

Figure 4: LALAL.AI

The stems were then imported into FL Studio (Image-Line, 2024), my DAW of choice, with a bridged instance of Revoice Pro (Synchro Arts, 2024) loaded. A side-by-side comparison of the pitch changes in Muyahayo’s and Ed Sheeran’s vocals was made. The Muyahayo vocals were also compared with those of another vocalist of a similar genre, Dato’ Siti Nurhaliza, in her traditional Malay song Cindai (Suria Records, 2013). The beat quantization and tempo changes of Muyahayo were likewise compared with Ed Sheeran’s songs.

Musicfy (Figure 5) was used to alter the samples into different sounds of choice from pre-trained database models. Not limited to vocals, I also altered the timbre of the percussive samples from the instrumental stems.

Figure 5: Musicfy

Samples were loaded into Synplant 2 (Figure 6), a synthesizer that analyzes a sample and recreates it as a patch based on its “genetic code”, which can then be further modified through a blend of subtractive and FM synthesis techniques.

Figure 6: Synplant 2

Further instruments and elements were then introduced through plugins such as Kontakt Session Strings 2 (Native Instruments, 2024), shown in Figure 7, and Serum, following the creative direction of the piece. Effects such as delays, reverbs (Figure 8) and other modulators were used where necessary. Self-recorded foley and samples were also added to further enhance the soundscape.

Figure 7: Kontakt Session Strings 2

Figure 8: Valhalla Vintage Verb

The goal was to create a piece of music to showcase the creative fusion of Rodat Ensemble with modern music production techniques.

RESEARCH FINDINGS

Technological Problems in Digitizing Rodat Ensemble

For the first objective, I set out to identify the technological problems in digitizing the Rodat Ensemble. After separating the vocal and instrumental stems of Muyahayo, Thinking Out Loud and Cindai with LALAL.AI, Revoice Pro was used for analysis, as in Figure 9.

Figure 9: Comparison of Muyahayo, Thinking Out Loud and Cindai

The comparison showed that the Muyahayo and Dato’ Siti vocals have very relaxed tuning, while Ed Sheeran’s vocals are closer to a perfect, “auto-tuned” character. As shown in Figure 9, the traditional vocals are regularly off-pitch. However, this is not detrimental to the song; in fact, it is a strength of traditional vocal performance, especially visible in the vocal ornamentation performed by the traditional singers. By contrast, Ed Sheeran’s vocals, as in most pop songs, are tuned almost perfectly to the centre of each note, as shown in Figure 10 below. This is how we hear western pop music today.

Figure 10: Comparison of pitch

From these results, it can be concluded that the melodic material of Muyahayo is micro-tonal in nature. These minute pitch variations resemble those produced on fretless string instruments, for example, the violin.
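
The pitch analysis in this study was carried out visually in Revoice Pro, but the same micro-tonal character could, in principle, be quantified in code. The sketch below tracks the fundamental frequency of a (hypothetically named) vocal stem with librosa’s pYIN implementation and reports the average deviation, in cents, from the nearest equal-tempered pitch.

```python
# A sketch of quantifying micro-tonal deviation from 12-tone equal temperament.
import librosa
import numpy as np

y, sr = librosa.load("muyahayo_vocal_stem.wav", sr=None)   # hypothetical stem file

f0, voiced_flag, _ = librosa.pyin(
    y, sr=sr,
    fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C6"),
)

f0 = f0[voiced_flag & ~np.isnan(f0)]          # keep voiced, valid frames only
midi = librosa.hz_to_midi(f0)                 # fractional MIDI note numbers
cents_off = (midi - np.round(midi)) * 100.0   # deviation from the nearest 12-TET note

print(f"mean |deviation|: {np.mean(np.abs(cents_off)):.1f} cents")
```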

Along with its micro-tonal detail, the Muyahayo singing also shows a more relaxed approach to the timing of each syllable. This results from the vocal performance intentionally sitting slightly ahead of or slightly behind the underlying rhythm or pulse of the music. The technique creates a relaxed, conversational atmosphere that commonly induces feelings of closeness or emotional understanding. By not adhering strictly to the tempo, the singer gains more freedom in how the music is presented, which increases the overall expressiveness of the performance.
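
The laid-back timing could be examined in a similar way. The sketch below, under the assumption that separated vocal and instrumental stems are available as local files, estimates a beat grid from the tar stem and measures how far each detected vocal onset falls from its nearest beat.

```python
# A sketch of measuring "laid-back" timing: vocal onsets vs. the beat grid.
import librosa
import numpy as np

inst, sr = librosa.load("muyahayo_instrumental_stem.wav", sr=None)   # hypothetical
vocal, _ = librosa.load("muyahayo_vocal_stem.wav", sr=sr)            # hypothetical

_, beat_times = librosa.beat.beat_track(y=inst, sr=sr, units="time")
onsets = librosa.onset.onset_detect(y=vocal, sr=sr, units="time")

# Signed offset (seconds) of each vocal onset from the nearest beat.
offsets = [o - beat_times[np.argmin(np.abs(beat_times - o))] for o in onsets]
print(f"median offset: {np.median(offsets) * 1000:.0f} ms")  # positive = behind the beat
```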

In general, the main concerns about the digitization of the Rodat Ensemble involve the micro-tonal and temporal differences that characterize the samples.

Creative Utilization of Innovative Techniques and Tools

The first song (S1) is an experimental fusion of traditional Rodat sounds with the high-intensity domain of Electronic Dance Music (EDM). Merging authentic tar rhythms and vocal motifs from Rodat performances into a modern EDM framework, the composition creates a shared sonic space between cultural tradition and modern club-friendly landscapes. With driving beats, layered synths, and dynamic drops, the project aims to preserve Rodat’s legacy and take it to new horizons, where tradition meets transformation on the dance floor.

For the main melody line, I cleaned background noise from interview recordings between Assoc. Prof. Mohd. Azam Sulong and Pok Jak using LALAL.AI (Figure 11) and spliced them into short quotes to fit the melody and rhythm throughout the song (Figure 12).

Figure 11: Cleaning up samples in LALAL.AI

Figure 12: Splicing of interview samples into creative quotes

I later added processes such as compression to control the dynamic range, bit crushing for a lossy vintage sound, equalization (EQ) to shape the overall sound, and reverb to create a sense of space. For EQ, I used Fabfilter’s Pro Q3 (Figure 13) with a low-cut band at 140 Hz and a Q of 1 to remove unwanted low frequencies. Another bell band was set at about 3.5 kHz with a gain of 2.5 dB and a Q of 1 to emphasize the frequencies important to human speech (Yost, 2007).

Figure 13: Equalizing interview samples with Fabfilter’s Pro Q3
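
For readers without access to Pro Q3, the same two moves (a low cut at 140 Hz and a +2.5 dB bell around 3.5 kHz with a Q of 1) can be approximated in code. The sketch below uses a Butterworth high-pass and an RBJ-cookbook peaking biquad via scipy; the input file name is hypothetical.

```python
# A sketch of the EQ moves described above, implemented with scipy biquads.
import numpy as np
import soundfile as sf
from scipy import signal

def peaking_eq(fs, f0, gain_db, q):
    """Return (b, a) coefficients for an RBJ peaking-EQ biquad."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

x, fs = sf.read("interview_sample.wav")                       # hypothetical clip

b_hp, a_hp = signal.butter(2, 140, btype="highpass", fs=fs)   # low cut at 140 Hz
y = signal.lfilter(b_hp, a_hp, x, axis=0)

b_pk, a_pk = peaking_eq(fs, 3500, 2.5, 1.0)                   # +2.5 dB speech-presence bell
y = signal.lfilter(b_pk, a_pk, y, axis=0)

sf.write("interview_sample_eq.wav", y, fs)
```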

For compression, I used CLA-76 (Figure 14), with the Attack set to 4, Release to 7 and Ratio to 4:1.

Figure 14: CLA-76 Compressor
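
The CLA-76’s Attack and Release values are knob positions rather than milliseconds, so they cannot be translated directly into code. Purely as an illustration of the 4:1 ratio, the sketch below implements a basic feed-forward compressor on a mono signal with assumed, illustrative time constants.

```python
# A toy feed-forward compressor sketch (mono signal, illustrative constants).
import numpy as np

def compress(x, fs, threshold_db=-18.0, ratio=4.0, attack_ms=1.0, release_ms=100.0):
    eps = 1e-9
    level_db = 20 * np.log10(np.abs(x) + eps)            # instantaneous level
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)                 # static 4:1 gain computer

    # Smooth the gain with separate attack/release one-pole filters.
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    smoothed = np.zeros_like(gain_db)
    g = 0.0
    for n, target in enumerate(gain_db):
        coef = a_att if target < g else a_rel             # falling gain = attack phase
        g = coef * g + (1.0 - coef) * target
        smoothed[n] = g
    return x * (10 ** (smoothed / 20.0))
```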

To make the sample more distinctive, I added kHs Bitcrush (Figure 15), with the Quantize knob set to 100% wet, Bits to 100%, Dither to 100%, analog-to-digital conversion quality (ADC Q) to 0, digital-to-analog conversion quality (DAC Q) to 55%, and the overall mix to 100% wet, to achieve a warm vintage texture.

 

Figure 15: Degrading the sample quality with kHs Bitcrush
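
Bit crushing itself is a simple operation. The sketch below shows the two core ingredients, amplitude quantization and sample-and-hold downsampling, on a mono signal; it approximates the effect in spirit and does not model kHs Bitcrush’s ADC/DAC quality controls.

```python
# A minimal bitcrusher sketch: reduced bit depth plus sample-and-hold.
import numpy as np

def bitcrush(x, bits=8, downsample=4):
    levels = 2 ** bits
    crushed = np.round(x * (levels / 2)) / (levels / 2)   # coarse amplitude quantization
    # Hold every `downsample`-th sample to lower the effective sample rate.
    held = np.repeat(crushed[::downsample], downsample)[: len(x)]
    return held
```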

For the buildup, I loaded a sample of the tar into Synplant 2 (Figure 16) and modified the Envelope and LFOs. The plugin also has Frequency Modulation Oscillators and Filters to further shape the sound into a snare hit while still resembling the original sample.

 

Figure 16: Tar sample loaded and altered in Synplant 2

Similarly, the tar-snare sample was processed with a Pro Q3 EQ, with a low cut at 330 Hz and a Q of 1 to remove muddy low-mid frequencies. A few saturation plugins, namely Decapitator (Figure 17) and Saturn 2, were also added to enrich the sound, together with a Valhalla Delay (Figure 18) set to a 1/8 note with feedback at 22.4%.

Figure 17: Decapitator saturation plugin

Figure 18: Valhalla Delay
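
A tempo-synced feedback delay like the one described is straightforward to sketch. The example below computes an eighth-note delay time from an assumed tempo of 128 BPM (the project tempo is not stated here) and applies 22.4% feedback to a mono signal.

```python
# A sketch of an eighth-note feedback delay (assumed 128 BPM, illustrative mix).
import numpy as np

def eighth_note_delay(x, fs, bpm=128.0, feedback=0.224, mix=0.3):
    delay_samples = int(round((60.0 / bpm / 2.0) * fs))   # 1/8 note in samples
    buf = np.zeros(delay_samples)                          # circular delay buffer
    out = np.zeros_like(x)
    for n in range(len(x)):
        delayed = buf[n % delay_samples]                   # signal from one 1/8 note ago
        buf[n % delay_samples] = x[n] + delayed * feedback # write input + feedback
        out[n] = (1.0 - mix) * x[n] + mix * delayed
    return out
```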

Lastly, Valhalla Vintage Verb (Figure 19) was added to seat the sound in the mix, with the Mix knob set to 86.8%, Predelay to 20 ms, and Decay to 3.08 s to create a long, echoey tail.

Figure 19: Valhalla Vintage Verb
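
As a crude stand-in for these reverb settings, the sketch below convolves a mono signal with an exponentially decaying noise impulse response using a 20 ms predelay, a roughly 3 s decay, and a very wet mix; it is only an illustration, not VintageVerb’s algorithm.

```python
# A crude convolution-reverb sketch approximating the settings described above.
import numpy as np
from scipy.signal import fftconvolve

def simple_reverb(x, fs, predelay_ms=20.0, decay_s=3.08, mix=0.868):
    rng = np.random.default_rng(0)
    n = int(decay_s * fs)
    t = np.arange(n) / fs
    ir = rng.standard_normal(n) * np.exp(-3.0 * t / decay_s)          # decaying noise tail
    ir = np.concatenate([np.zeros(int(predelay_ms / 1000 * fs)), ir]) # 20 ms predelay
    wet = fftconvolve(x, ir)[: len(x)]
    wet /= np.max(np.abs(wet)) + 1e-9                                  # keep levels sane
    return (1.0 - mix) * x + mix * wet
```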

For the chorus or drop, Pok Jak’s vocals were altered in Harmor (Figure 20) to produce a simple melody accompanying the roaring synths. They were then processed with EQ, a voice-manipulation plugin called Little AlterBoy (Figure 21), with the formant set to 3.3, the mode to quantize and the drive to 3.3, and Valhalla Vintage Verb.

Figure 20: Vocal chop and sampling in Harmor

 

Figure 21: Vocal manipulation in Little AlterBoy

For the second half of the drop, I pieced together a loop from recorded tar samples to mimic percussion such as tambourines and hi-hats for a sense of rhythmic movement (Figure 22).

 

Figure 22: Authentic tar samples

A section of a Rodat song (Figure 23) performed by Pok Jak was fitted in the ending to conclude the song.

 

Figure 23: Pok Jak solo performing a Rodat song

CONCLUSION

The research illustrates the great potential that music technology holds for preserving and revitalizing traditional artistic productions, as evidenced by the case study of the Rodat Ensemble. Focusing on Muyahayo, the ensemble’s flagship composition, the research outlined the unique micro-tonal and temporal aspects inherent in its traditional vocal and instrumental techniques. Although these particularities posed obstacles to digitization, they were successfully embraced and enriched through the use of AI-powered software tools such as LALAL.AI, Musicfy, and Synplant 2.

Using a mixed-methods approach in combination with a practice-led methodology, a new auditory work was produced that blends the interpretative nuances of Rodat with modern production techniques. This blending both retained the essential elements of the Rodat Ensemble and reframed them so as to appeal to and engage current audiences, particularly younger demographic groups.

This study reiterates the importance of preserving cultural heritage through creative means. The techniques presented offer a working road map for other threatened forms of traditional music, demonstrating that careful change, while staying loyal to the original material, can allow heritage to be conserved and to thrive in new forms. Nevertheless, it is imperative that specialists assess such works to determine their uniqueness, effectiveness, and fidelity to the source material.

Suggestions for future practice include collaboration with indigenous practitioners, the use of high-quality ethnic sound libraries, and educating the general public on the intent behind such fusions; all are key strategies for innovating with respect.

REFERENCES

  1. Alten, S. R. (2013). Audio in Media (10th ed.). Wadsworth, Cengage Learning.
  2. Awang, H. (2021). RODAT TERENGGANU – Warisan Yang Ditinggalkan [Video]. YouTube. https://www.youtube.com/watch?v=o87KQFX7uyQ
  3. Briot, J. P., Hadjeres, G., & Pachet, F. D. (2020). Deep Learning Techniques for Music Generation. Springer.
  4. Bougueng Tchemeube, R., Ens, J., & Pasquier, P. (2025). Apollo: An Interactive Environment for Generating Symbolic Musical Phrases using Corpus-based Style Imitation. ResearchGate.
  5. Creswell, J. W., & Plano Clark, V. L. (2011). Designing and Conducting Mixed Methods Research (2nd ed.). Thousand Oaks, CA: Sage Publications.
  6. Cope, D. (2000). The Algorithmic Composer. A-R Editions, Inc.
  7. Dash, A., & Agres, K. R. (2023). AI-Based Affective Music Generation Systems: A Review of Methods and Challenges. ACM, New York, NY, USA. https://arxiv.org/abs/2301.06890
  8. Deltorn, J., & Macrez, F. (2018). Authorship in the Age of Machine Learning and Artificial Intelligence. In S. M. O’Connor (Ed.), The Oxford Handbook of Music Law and Policy.
  9. Denzin, N. K., & Lincoln, Y. S. (Eds.). (2018). The Sage Handbook of Qualitative Research (5th ed.). Thousand Oaks, CA: Sage Publications.
  10. FasterCapital. (2024). AI generated content for music and sound effects. https://fastercapital.com/content/Ai-generated-content-for-music-and-sound-effects.html#The-Benefits-of-Using-AI-Generated-Content-in-Music-Production
  11. FasterCapital. (2024). AI generated music: new soundscape. https://fastercapital.com/content/Ai-generated-music-new-soundscape.html#Breaking-Down-the-AI-Music-Creation-Process
  12. Fernández, J. D., & Vico, F. (2013). AI Methods in Algorithmic Composition: A Comprehensive Survey. Journal of Artificial Intelligence Research, 48, 513–582.
  13. Fiebrink, R., & Caramiaux, B. (2016). The Role of Machine Learning in Music Composition, Performance, and Interaction. Musicae Scientiae, 20(1), 73–86.
  14. Flasar, M. (2024). Mankind – Music – Technology: Technology in the Musical Thinking of the 20th and Early 21st Centuries. Masaryk University Monographs.
  15. Hernandez-Olivan, C., & Beltran, J. R. (2021). Music Composition with Deep Learning: A Review.
  16. Hosken, D. (2014). An Introduction to Music Technology (2nd ed.). Routledge.
  17. Image-Line. (2024). FL Studio 21. https://www.image-line.com/
  18. Izotope. (2024). Ozone 11. https://www.izotope.com/en/products/ozone.html
  19. McNamara, A. (2012). Six Rules for Practice-led Research. Journal of Writing and Writing Programs, 2012(S14), 1–15.
  20. Musicfy Inc. (2023). Musicfy. https://musicfy.lol/
  21. Nair, S. V. (2025). Collaborative AI in Music Composition: Human-AI Symbiosis in Creative. ResearchGate.
  22. Nasuruddin, M. G. (1991). Musik Melayu Tradisi. Selangor Darul Ehsan: Dewan Bahasa dan Pustaka.
  23. Native Instruments. (2024). Kontakt Session Strings 2. https://www.native-instruments.com/en/products/komplete/cinematic/session-strings-2/
  24. Newman, M., Morris, L., & Lee, J. H. (2023). Human-AI Music Creation: Understanding the Perceptions and Experiences of Music Creators for Ethical and Productive Collaboration. Proceedings of the International Society for Music Information Retrieval Conference (ISMIR).
  25. NuEdge Development. (2024). Synplant 2. https://soniccharge.com/home
  26. Omnisale GmbH. (2024). LALAL.AI. https://www.lalal.ai/
  27. Ozcan, U. (2023). Baiame (Didgeridoo Techno) [Video]. YouTube. https://www.youtube.com/watch?v=YHTUWqQVXMM
  28. Pasquier, P., Eigenfeldt, A., Bown, O., & Dubnov, S. (2017). A History of AI Music: Integrating Artificial Intelligence into Music Composition and Performance. ACM Computing Surveys (CSUR), 50(3), 1–35.
  29. Redhead, T. (2025). Interactive Technologies and Music Making – Transmutable Music. New York: Routledge.
  30. Reich, S. (2014). Music for 18 Musicians [Video]. YouTube. https://www.youtube.com/watch?v=71A_sm71_BI&t=1866s
  31. Ross, V. (2022). Practice-Based Methodological Design for Performance-Composition and Interdisciplinary Music Research. Malaysian Journal of Music, 11(1), 109–125.
  32. Ruthman, S. A., & Mantie, R. (2017). The Oxford Handbook of Technology and Music Education. New York: Oxford University Press.
  33. Senior, M. (2018). Mixing Secrets for the Small Studio (2nd ed.). Routledge.
  34. Smith-Muller, T. (2023). AI Music: What Musicians Need to Know. https://online.berklee.edu/takenote/ai-music-what-musicians-need-to-know/
  35. Suria Records. (2013). Cindai (Official Music Video) [Video]. YouTube. https://www.youtube.com/watch?v=dWFzE0NiGrI
  36. Synchro Arts. (2024). Revoice Pro 5. https://www.synchroarts.com/revoice-pro-5
  37. Vear, C. (2022). The Routledge International Handbook of Practice-Based Research. London and New York: Routledge, Taylor & Francis Group.
  38. WarisanNusantara. (2012). Lagu Rakyat Terengganu – Inang Rodat [Video]. YouTube. https://www.youtube.com/watch?v=QnIeLXD0E3c
  39. Zagorski-Thomas, S. (2014). The Musicology of Record Production. Cambridge University Press.
  40. Zulhezan. (2020). Hadhrat Rodat (2016) [Video]. YouTube. https://www.youtube.com/watch?v=rwDyY2AQeQ0
