International Journal of Research and Innovation in Applied Science (IJRIAS)

Enhancing Healthcare Data Entry Efficiency and Accuracy through Voice Assistant Systems: A Case Study of Obijackson Hospital Okija

Okeke Ogochukwu C., Ezenwegbu Nnamdi Chimaobi

Department of Computer Science, Chukwuemeka Odumegwu Ojukwu University, Uli, Anambra State, Nigeria

DOI: https://doi.org/10.51584/IJRIAS.2024.910028

Received: 07 October 2024; Accepted: 14 October 2024; Published: 15 November 2024

ABSTRACT

Traditional data entry methods are often time-consuming and prone to errors, negatively impacting patient care and administrative workflows. To address these challenges, this research proposed a voice-assisted data entry system that leverages advanced speech recognition and natural language processing technologies. Object Oriented Analysis and Design Methods were used to implement a system that allows healthcare providers to input patient data and medical information through voice commands, streamlining the data entry process and reducing the cognitive load on healthcare professionals. The system’s design incorporates user-centred principles, ensuring it is intuitive and seamlessly integrated into existing healthcare IT infrastructures. Key features include real-time speech-to-text conversion, contextual understanding of medical terminology, and robust data security measures to protect patient information. The implementation phase involved rigorous testing in a simulated healthcare environment, assessing the system’s accuracy, speed, and user satisfaction. Results indicated significant improvements in data entry efficiency and reduced error rates compared to manual entry methods. Feedback from healthcare professionals highlighted the system’s potential to enhance productivity and patient care quality. This dissertation contributes to the field of healthcare informatics by providing a practical solution to a critical problem, demonstrating the feasibility and benefits of voice-assisted technologies in medical data management. Future work will explore further refinements and the potential for broader adoption across diverse healthcare settings.

Keywords: Electronic Health Record, Voice Assistance, Data Entry, Speech Synthesis

INTRODUCTION

The introduction of Electronic Health Records (EHR) has revolutionized the way patient information is stored and managed in healthcare. EHRs offer advantages such as remote access, digital information storage, and searchable data, providing a more efficient alternative to traditional paper records (Centers for Medicare & Medicaid Services, n.d.). However, EHR systems face usability challenges, particularly navigation inefficiencies when using default input methods like the keyboard and mouse. Healthcare providers often struggle to operate EHRs while simultaneously attending to patients, resulting in delayed documentation and impaired patient engagement (Snyder et al., 2023). Furthermore, keyboard-based data entry introduces issues like poor spelling, copy-paste errors, and incorrect documentation. Although handheld devices like tablets have improved mobility and usability, they carry infection-transmission risks and make text entry via on-screen keyboards cumbersome.

To address these challenges, the healthcare industry has recognized the need for quality patient-provider interaction time. However, the time spent entering data into EHRs hinders healthcare providers from focusing on patient care, especially for those unfamiliar with EHR systems. This has led to interest in alternative data entry methods, particularly voice assistant technology. Voice assistants can transcribe real-time discussions between healthcare providers and patients, enabling faster and more accurate documentation. Voice input technology, powered by advancements in artificial intelligence (AI), offers a promising solution to the inefficiencies of standard interfaces by reducing data entry time and enhancing patient care (Syam & Sharma, 2018).

This research proposes the use of voice assistants to facilitate data entry, addressing barriers like user familiarity and reducing the contamination risks posed by traditional input devices. The developed system not only enhances efficiency in data entry but also increases patient-provider interaction time, minimizes the learning curve associated with EHR systems, and holds the potential for wider applications through command-based actions.

Statement of Problem

The following problems in the use of Electronic Health Records necessitated this research:

  1. Time-consuming and cumbersome data entry: Blijleven et al. (2022) noted that healthcare providers often spend significant time and effort manually entering patient data into electronic health record systems. This process is repetitive, error-prone, and takes away valuable time meant for patient care.
  2. User-friendliness and adoption barriers: Existing data entry systems have complex interfaces and require extensive training, leading to user frustration and resistance to adoption. This results in suboptimal utilization of technology and hinders the efficiency of healthcare workflows (Kumah-Crystal et al., 2018).
  3. Data entry errors and accuracy: According to Kumar & Mostafa (2020), manual data entry is susceptible to errors, such as typos or transcription mistakes, which can have serious consequences for patient safety and treatment outcomes. Ensuring accurate and reliable data entry is crucial for effective healthcare management.

Aims and Objectives of the Study

This research aims to design and implement a voice assistant data entry system for healthcare providers. The specific objectives are:

  1. Develop a voice assistant data entry system that streamlines and automates the process of entering patient data into electronic health record systems, reducing the time and effort required by healthcare providers.
  2. Design an intuitive and user-friendly interface for the voice assistant data entry system to enhance usability and minimize the learning curve for healthcare providers, thereby promoting its adoption and integration into routine workflows.
  3. Evaluate the accuracy and reliability of the voice assistant data entry system compared to manual data entry methods, assessing its ability to minimize errors and improve the quality of patient data in electronic health records.

SUMMARY OF LITERATURE REVIEW

The literature review in the dissertation explores the adoption of Electronic Health Records (EHRs), noting that while EHRs have brought significant efficiency improvements over traditional paper-based systems, they also present usability challenges (Sezgin et al., 2021; Bhatt, 2020). Issues like inefficient data entry using keyboards and mice often result in reduced patient engagement and increased chances of errors (Liu et al., 2023). Voice assistants have emerged as a promising solution to these challenges by enabling hands-free data entry, reducing errors, and improving workflow efficiency (Gupta, 2022).

The review identifies several benefits of using voice assistants in healthcare. Voice assistants help reduce the cognitive load on healthcare providers, facilitate real-time data entry, and enhance patient-provider interactions (Sezgin et al., 2021; Bălan, 2023). Specific medical applications and systems like Sensely’s Ask NHS, Your.MD, Buoy Health, and MedWhat are currently in use, offering symptom diagnosis and patient support, though they lack seamless integration with EHR systems (Seymour et al., 2023; O’Connor, 2011).

However, the literature also highlights several challenges in implementing voice assistant technology in healthcare, including issues with voice recognition accuracy, particularly when dealing with medical terminology and diverse accents (Goss et al., 2016). There are also concerns regarding patient data security and privacy, emphasizing the need for stringent regulations like the Health Insurance Portability and Accountability Act (HIPAA) (Bălan, 2023). Furthermore, ethical and legal implications surrounding the use of AI-driven voice assistant systems in healthcare require more in-depth exploration to ensure compliance with industry standards (Syam & Sharma, 2018).

Research gaps identified in the literature include the need for better integration strategies for voice assistant systems with existing EHR infrastructure without disrupting workflows or compromising data integrity (Sezgin, 2021). The literature review suggests that future studies should focus on improving voice recognition technology, handling diverse accents, and developing ethical guidelines for AI implementation in healthcare (Wen-Chin & Mu-Heng, 2023).

Table 1: Summary of literature review findings

Aspect | Findings
EHR Usability Issues | Traditional EHR interfaces cause navigation and data entry inefficiencies, reducing patient engagement and increasing errors (Sezgin et al., 2021; Liu et al., 2023).
Voice Assistant Benefits | Hands-free operation, reduced data entry errors, faster documentation, improved clinician productivity, and better patient interaction (Gupta, 2022; Sezgin et al., 2021).
Medical Applications | Systems like Sensely’s Ask NHS, Your.MD, Buoy Health, and MedWhat offer symptom diagnosis and patient support but lack seamless EHR integration (Seymour et al., 2023).
Challenges Identified | Issues with voice recognition accuracy, handling of medical terminology, patient data security, and the need for ethical guidelines (Goss et al., 2016; Bălan, 2023).
Integration with EHR Systems | There is a lack of research on seamless integration with existing EHR systems without disrupting workflows or data integrity (Sezgin, 2021).
Future Directions | Calls for research on improving voice recognition technology, handling diverse accents, and the ethical implications of AI in healthcare (Wen-Chin & Mu-Heng, 2023).

METHODOLOGY

The methodology adopted in this research is OOADM (Object-Oriented Analysis and Design Methodology). OOADM is a mature approach that describes a system through a set of design models which are then mapped onto a running application, which makes it well suited to the desktop application developed in this research. OOADM involves several steps or phases that guide the development of software systems using object-oriented principles. While specific methodologies may vary in their terminology and approach, the following steps were used in this research:

Requirements gathering: at this stage, the functional requirements of the desktop application are captured and specified, along with the roles and their tasks. Next, the possible application scenarios are described. Scenarios are narrative descriptions of how the application may be used to allow actors to perform each task. Scenarios are grouped into functional units, modelled in the Unified Modelling Language (UML) as use cases. Because a set of related scenarios may involve different actors, each use case must identify the actors it belongs to. Hence, a UML use case diagram was used to depict the requirements of the desktop application; it is shown in Figure 2.

Figure 2: Use case of the proposed system

Analysis: In this phase, the gathered requirements are analyzed to identify the objects, classes, attributes, methods, and relationships that will form the basis of the software solution. Techniques such as use case modelling, domain modelling, and behaviour modelling are used to understand the problem domain and define the system’s structure and behaviour. The class diagram of the proposed system used in this analysis is shown in Figure 3.

Design: After the analysis is finished, attention turns to developing the software system’s architecture and parts. Based on the analytical findings, design decisions are made, with a focus on developing scalable, modular, and reusable solutions. Encapsulation, inheritance, and polymorphism are examples of object-oriented design principles that are used to make sure the system is adaptable and manageable.

Implementation: In this stage, a programming language is used to convert the designed components into executable code. Object-oriented concepts are typically implemented using object-oriented programming (OOP) languages such as Python, Java, or C++. To ensure code quality and maintainability, developers write code according to the specifications established during the design phase, adhering to coding standards and best practices.
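
As an illustration of this phase, the sketch below shows how EHR entities of the kind implied by the class diagram in Figure 3 might be expressed in Python using encapsulation, inheritance, and polymorphism. It is not taken from the paper’s codebase; the class and field names are illustrative assumptions.

```python
# A minimal sketch (not the authors' implementation) of the OO design principles
# named above -- encapsulation, inheritance, polymorphism -- applied to EHR entities.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class Record:
    """Base class: every record carries a patient ID and a timestamp."""
    patient_id: str
    created_at: datetime = field(default_factory=datetime.now)

    def summary(self) -> str:  # overridden by subclasses (polymorphism)
        return f"Record for patient {self.patient_id}"


@dataclass
class VitalSigns(Record):
    pulse_bpm: int = 0
    temperature_c: float = 0.0

    def summary(self) -> str:
        return f"Vitals for {self.patient_id}: {self.pulse_bpm} bpm, {self.temperature_c} C"


@dataclass
class Prescription(Record):
    drug: str = ""
    dosage: str = ""

    def summary(self) -> str:
        return f"Prescription for {self.patient_id}: {self.drug} {self.dosage}"


if __name__ == "__main__":
    records: List[Record] = [
        VitalSigns("P-001", pulse_bpm=72, temperature_c=36.8),
        Prescription("P-001", drug="Amoxicillin", dosage="500 mg"),
    ]
    for r in records:  # one interface, many behaviours
        print(r.summary())
```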

Figure 3: Class diagram of the proposed system

Testing: During the testing process, the software is validated and verified to make sure it satisfies the requirements and performs as intended. A variety of testing techniques, such as acceptance, system, integration, and unit testing, are used to find bugs in software systems and guarantee their dependability.
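
For instance, the testing phase might include unit tests of the kind sketched below. The map_field() helper is a hypothetical stand-in for the system’s command-to-field mapping, not the authors’ actual code.

```python
# A minimal sketch of a unit test for the voice-command mapping, under the
# assumption of a hypothetical map_field() helper (not from the paper's code).
import unittest


def map_field(utterance: str) -> str:
    """Toy version of the command-to-field mapping under test."""
    return "vital_signs" if "vital signs" in utterance.lower() else "notes"


class TestCommandMapping(unittest.TestCase):
    def test_vital_signs_phrase_maps_to_vital_signs_field(self):
        self.assertEqual(map_field("Enter patient vital signs"), "vital_signs")

    def test_unknown_phrase_falls_back_to_notes(self):
        self.assertEqual(map_field("Patient reports mild headache"), "notes")


if __name__ == "__main__":
    unittest.main()
```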

Deployment: After the software has undergone extensive testing and validation, it is deployed to the production environment for end users. Deployment tasks include software installation, system configuration, user support, and training.

Maintenance: Throughout the software system’s lifecycle, maintenance and support are the responsibilities of the last stage of OOADM. This includes fixing bugs, putting updates and improvements into practice, and giving users continuous technical help. Maintenance tasks assist in preserving the software’s functionality, security, and compatibility with evolving business requirements.

These steps offer an organised method for developing software, guiding practitioners through the whole process of creating object-oriented software systems. It is important to note that OOADM is incremental and iterative, with feedback and refinement at every stage so that the software solution continually improves.

Proposed System and Implementation

The Proposed System

The proposed system is equipped with a voice assistant, and the data entry process is streamlined and facilitated through voice commands. Healthcare providers start by invoking the voice assistant with a wake word – “Hey Nnamdi!” – or by pressing a designated button on the device. The system then prompts the user to authenticate their identity using face recognition or a PIN to ensure data security and privacy.

Once authenticated, the user initiates the data entry task by stating the specific action or information they want to input into the EHR system, for example, “Enter patient vital signs” or “Update medication list for patient”. The user then provides voice commands specifying the details of the data entry task. This may involve dictating patient information, medical history, treatment details, diagnostic test results, prescriptions, and other clinical data. The voice assistant utilizes natural language processing (NLP) and speech recognition technology to transcribe the spoken commands and convert them into text.

The system may prompt the user to verify and clarify the transcribed data to ensure accuracy and completeness; for example, it may ask the user to confirm patient demographics or medication dosages. Once verified, the transcribed data is entered into the appropriate fields within the EHR system. The user may confirm the entry with additional voice commands or through the user interface, and can review and edit the entered data, issuing voice commands to make corrections or additions before finalizing the task.

After completing the data entry task, the user saves the entered data within the EHR system, and the system provides a confirmation message indicating that the data has been successfully saved. The user may then log out of the EHR system or end the voice assistant session, preserving data security and privacy.
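
A minimal sketch of this workflow is given below. It is not the deployed Obijackson Hospital system: the transcribe() stub stands in for a real speech-to-text service, and all identifiers (WAKE_WORD, VALID_PINS, parse_command) are illustrative assumptions introduced for the example.

```python
# A minimal sketch of the described flow: wake word, authentication, command
# capture, confirmation, and saving to the record store. transcribe() merely reads
# typed text in place of real speech recognition.
from typing import Dict

WAKE_WORD = "hey nnamdi"          # wake phrase quoted in the paper
VALID_PINS = {"nurse01": "4321"}  # hypothetical PIN store for authentication


def transcribe(audio_prompt: str) -> str:
    """Stand-in for the speech-recognition step: here we just read typed text."""
    return input(audio_prompt).strip().lower()


def authenticate() -> bool:
    user = transcribe("User ID: ")
    pin = transcribe("PIN: ")
    return VALID_PINS.get(user) == pin


def parse_command(utterance: str) -> Dict[str, str]:
    """Very small 'NLP' step: map a dictated phrase to an EHR field."""
    if "vital signs" in utterance:
        return {"field": "vital_signs", "value": transcribe("Dictate vitals: ")}
    if "medication" in utterance:
        return {"field": "medications", "value": transcribe("Dictate medication: ")}
    return {}


def main() -> None:
    ehr_record: Dict[str, str] = {}                   # stands in for the EHR backend
    if transcribe("Say the wake word: ") != WAKE_WORD:
        return
    if not authenticate():
        print("Authentication failed.")
        return
    entry = parse_command(transcribe("Command: "))
    if not entry:
        print("Command not recognised.")
        return
    if transcribe(f"Save '{entry['value']}' to {entry['field']}? (yes/no): ") == "yes":
        ehr_record[entry["field"]] = entry["value"]   # verified data is stored
        print("Entry saved:", ehr_record)


if __name__ == "__main__":
    main()
```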

Advantages of the Proposed System

Advantages of the proposed system are:

  1. Improvement in efficiency and time savings for healthcare providers.
  2. Voice commands enable quick and hands-free data entry, allowing clinicians to input patient information, update records, and complete documentation tasks more efficiently compared to manual typing or clicking.
  3. Voice assistants reduce the risk of data entry errors by minimizing manual input and transcription mistakes.
  4. Speech recognition technology accurately transcribes spoken words into text, reducing the incidence of typographical errors, transcription errors, and incomplete or inaccurate data entry.
  5. Clinicians can easily access and update patient records in real time, leading to smoother and more streamlined workflows.
  6. Voice assistants improve the user experience for healthcare providers by offering a more intuitive and natural interaction method.
  7. Voice assistants enable the hands-free operation of the EHR system, allowing clinicians to interact with patient records while performing other clinical tasks or procedures.
  8. Voice assistants can enhance patient engagement and satisfaction by facilitating more personalized and interactive communication between patients and healthcare providers.

High-Level Model of the Proposed System

The High-Level Model of the Proposed System is shown below in Figure 1. This high-level model illustrates the basic components and flow of the Voice Assistant Data Entry System for Healthcare Providers. It highlights the interaction between the healthcare provider interface, the voice assistant service, and the backend data processing and storage components.

Healthcare Provider (User Interface): This represents the interface through which healthcare providers interact with the system. It could be a web-based interface, a mobile application, or a dedicated device. Healthcare providers issue voice commands to perform various data entry tasks.

Voice Assistant Service: This component processes the voice commands issued by healthcare providers. It includes speech recognition, natural language understanding (NLU), and natural language processing (NLP) capabilities to interpret the spoken commands and convert them into actionable tasks.

Data Processing and Storage Components: This represents the backend components responsible for processing and storing the data entered via the voice assistant. It includes modules for validating, processing, and storing the data into the Electronic Health Record (EHR) system. Additionally, it may include integration with other healthcare systems and databases.
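
The sketch below illustrates, under the same kind of illustrative assumptions, how these three components could sit behind narrow interfaces so the speech layer can be swapped without touching the storage layer; none of the class or method names come from the paper’s implementation.

```python
# A minimal sketch of the three components in Figure 1 behind narrow interfaces.
from abc import ABC, abstractmethod
from typing import Dict


class VoiceAssistantService(ABC):
    """Speech recognition / NLU boundary."""
    @abstractmethod
    def interpret(self, utterance: str) -> Dict[str, str]: ...


class KeywordAssistant(VoiceAssistantService):
    def interpret(self, utterance: str) -> Dict[str, str]:
        text = utterance.lower()
        if "vital" in text:
            return {"field": "vital_signs", "value": text}
        return {"field": "notes", "value": text}


class EHRStorage:
    """Backend validation and persistence boundary."""
    def __init__(self) -> None:
        self._records: Dict[str, Dict[str, str]] = {}

    def save(self, patient_id: str, entry: Dict[str, str]) -> None:
        self._records.setdefault(patient_id, {})[entry["field"]] = entry["value"]


class ProviderInterface:
    """What the clinician interacts with; delegates to the other two components."""
    def __init__(self, assistant: VoiceAssistantService, storage: EHRStorage) -> None:
        self.assistant = assistant
        self.storage = storage

    def dictate(self, patient_id: str, utterance: str) -> None:
        self.storage.save(patient_id, self.assistant.interpret(utterance))


if __name__ == "__main__":
    ui = ProviderInterface(KeywordAssistant(), EHRStorage())
    ui.dictate("P-001", "Enter patient vital signs: pulse 72, temperature 36.8")
```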

Figure 1: High-Level Model of the Proposed System

RESULTS, DISCUSSION AND CONCLUSION

Results

The implementation of the voice assistant data entry system for healthcare providers resulted in several key findings. The system significantly reduced the time required for data entry into Electronic Health Records (EHR) compared to traditional keyboard and mouse input methods. Healthcare providers were able to input data using voice commands in real-time, resulting in a 40% decrease in data entry time on average. Hands-free operation enabled clinicians to maintain patient engagement during interactions, leading to a 30% increase in patient satisfaction scores reported during the trial phase. The accuracy of voice-transcribed data was tested using different accents and medical terminology.

The system achieved a 92% accuracy rate in transcribing standard medical terms, although the accuracy dropped to 78% when handling diverse accents and specialized medical vocabulary. Common errors included misinterpretation of homophones and medical jargon, which indicated a need for further improvement in natural language processing capabilities. User feedback indicated a high level of satisfaction with the intuitive design and ease of use of the voice assistant interface. Overall, 85% of healthcare providers found the system more user-friendly than traditional data entry methods. Some users reported a learning curve when adapting to voice commands, particularly in understanding the specific phrases required for accurate data entry.
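
As a hedged illustration of how such accuracy figures could be computed, the sketch below derives a word error rate (WER) from the edit distance between a reference transcript and a system transcript; the sample sentences are invented, and the 92%/78% figures above are not reproduced here.

```python
# A minimal sketch of word-error-rate (WER) computation via word-level edit distance.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edits to turn first i reference words into first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


if __name__ == "__main__":
    ref = "enter patient vital signs pulse seventy two"
    hyp = "enter patient title signs pulse seventy two"   # invented example output
    wer = word_error_rate(ref, hyp)
    print(f"WER: {wer:.2%}  word accuracy: {1 - wer:.2%}")
```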

The system incorporated secure authentication methods, such as voice biometrics and facial recognition, to ensure patient data privacy. However, concerns were raised regarding the potential vulnerability of voice data to unauthorized access. The encryption of data in transit and at rest was implemented to comply with Health Insurance Portability and Accountability Act (HIPAA) guidelines, addressing initial privacy concerns.
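
The sketch below illustrates one way encryption at rest could be realised, using the Fernet recipe from the Python cryptography package. It is offered as an assumption-laden example rather than the paper’s implementation, and encryption alone does not constitute HIPAA compliance.

```python
# A minimal sketch of symmetric encryption at rest using the `cryptography` package
# (pip install cryptography). Key management, access control, and auditing are
# still required for regulatory compliance; this only shows the encryption step.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, load from a secure key store
cipher = Fernet(key)

plaintext = b"P-001 | vitals: pulse 72 bpm, temp 36.8 C"   # invented record
token = cipher.encrypt(plaintext)  # what would be written to disk / the database
print(token)

restored = cipher.decrypt(token)   # only holders of the key can read the record
assert restored == plaintext
```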

Discussion

The results of the study demonstrate that the voice assistant data entry system can potentially transform data entry processes in healthcare environments by addressing some of the key challenges associated with traditional EHR interfaces. The significant reduction in data entry time suggests that voice assistants can streamline clinical workflows, allowing healthcare providers to focus more on patient care. These findings are consistent with previous studies that highlighted the potential of voice technology to reduce administrative burdens on medical staff (Gupta, 2022). The positive user feedback reinforces the notion that intuitive voice interfaces can enhance user engagement and adoption rates in clinical settings (Sezgin et al., 2021). While the voice assistant demonstrated a high accuracy rate with standard medical terms, the decrease in performance with diverse accents and specialized vocabulary indicates a limitation in the natural language processing (NLP) capabilities of the system. This aligns with the findings of Goss et al. (2016), who also reported similar challenges in speech recognition technology in medical applications. Addressing these limitations will be crucial for expanding the adoption of voice assistant systems across multicultural healthcare environments, where linguistic diversity is prevalent.

The study’s implementation of data encryption and secure authentication protocols demonstrates a proactive approach to safeguarding patient information. However, the ongoing concerns regarding the security of voice data echo the broader challenges faced by AI-driven healthcare technologies (Bălan, 2023). Future iterations of the system must focus on enhancing encryption techniques and developing robust security measures to further mitigate risks associated with data breaches. Integrating the voice assistant system with existing EHR platforms remains a challenge due to compatibility issues and the need for seamless workflow incorporation. This is in line with previous literature, which suggests that achieving interoperability in healthcare IT systems is a complex process (Sezgin, 2021). Further research and development should prioritize creating a standardized framework for integrating voice-driven interfaces with various healthcare software systems.

Conclusion

The research concludes that the voice assistant data entry system offers a promising solution for improving efficiency, accuracy, and user engagement in healthcare settings. By enabling hands-free data entry and reducing the reliance on traditional input methods, the system has the potential to transform clinical workflows and enhance patient-provider interactions. Despite its benefits, the study identifies several challenges, including limitations in voice recognition accuracy when dealing with diverse accents and specialized medical terminology, as well as concerns about data security and privacy. These findings underscore the need for ongoing advancements in NLP capabilities and robust encryption methods to fully realize the potential of voice assistant technology in healthcare.

While the voice assistant data entry system represents a significant step forward in integrating AI technology into healthcare, there is still a need for continuous innovation and adaptation to address existing limitations. By focusing on enhancing system accuracy, security, and seamless integration, future developments can pave the way for widespread adoption and improved patient care outcomes in the medical field.

Figure 4: Data entry software (implemented as EHR)

REFERENCES

  1. Apple (2018) iOS – Siri – Apple (UK). Available at: https://www.apple.com/uk/ios/siri/. (Accessed: 15 December  2023).
  2. Bălan, C. (2023). Chatbots and Voice Assistants: Digital Transformers of the Company–Customer Interface—A Systematic Review of the Business Research Literature. Journal of Theoretical and Applied Electronic Commerce Research, 18(2), 995. https://doi.org/10.3390/jtaer18020051
  3. Bhatt, V. N. (2020). Alexa for Health Practitioners (Order No. 27835534). Available from ProQuest One Academic (2409092166). http://ezproxy.newcastle.edu.au/login?url=https://www.proquest.com/dissertations-theses/alexa-health-practitioners/docview/2409092166/se-2
  4. Blijleven, V., Hoxha, F., & Jaspers, M. (2022). Workarounds in Electronic Health Record Systems and the Revised Sociotechnical Electronic Health Record Workaround Analysis Framework: Scoping Review. Journal of Medical Internet Research, https://doi.org/10.2196/33046
  5. Buoy Health (2018) Buoy Health: Check Symptoms & Find the Right Care. Available at: https://www.buoyhealth.com/. (Accessed: 25 November 2023).
  6. Centers for Medicare & Medicaid Services. (n.d.). Electronic Health Records. https://www.cms.gov/priorities/key-initiatives/e-health/records
  7. Complexity. (2023). Retracted: English Phrase Speech Recognition Based on Continuous Speech Recognition Algorithm and Word Tree Constraints. Complexity, 2023 https://doi.org/10.1155/2023/9892303
  8. Darda, P., Nerlekar, V., Bairagi, U., Pendse, M., & Sharma, M. (2021). Usage of Voice Assistant in Time of Covid-19 as a Touchless Interface. Academy of Strategic Management Journal, Suppl. Special Issue 6, 20, 1-13. Retrieved from http://ezproxy.newcastle.edu.au/login?url=https://www.proquest.com/scholarly-journals/usage-voice-assistant-time-covid-19-as-touchless/docview/2599946778/se-2
  9. Google (2018) Google Assistant – Your Own Personal Google. Available at: https://assistant.google.com/intl/en_uk/. (Accessed: 15 December 2023).
  10. Google Cloud (2018) Cloud Speech-to-Text - Speech Recognition | Cloud Speech-to-Text API | Google Cloud. Available at: https://cloud.google.com/speech-to-text. (Accessed: 28 November 2024).
  11. Goss, F., Zhou, L., Weiner, S. (2016) ‘Incidence of Speech Recognition Errors in the Emergency Department’, International Journal of Medical Informatics, vol.93, pp.70‐73. DOI: 10.1016/j.ijmedinf.2016.05.005.
  12. Gupta, H. (2022). Re-Modelling the Hospitality Business Using Artificial Intelligence as a Strategic Tool. Johar, 17(2), 1-16. http://ezproxy.newcastle.edu.au/login?url=https://www.proquest.com/scholarly-journals/re-modelling-hospitality-business-using/docview/2833745570/se-2
  13. Herff, C., Schultz, T. (2016) ‘Automatic Speech Recognition from Neural Signals: A Focused Review’, Frontiers in Neuroscience, vol.10, p.429. DOI: 10.3389/fnins.2016.00429.
  14. Kazuhiro, N., & Tomoaki, K. (2017). Psychologically-Inspired Audio-Visual Speech Recognition Using Coarse Speech Recognition and Missing Feature Theory. Journal of Robotics and Mechatronics, 29(1), 105-113. https://doi.org/10.20965/jrm.2017.p0105
  15. Kumah-Crystal, Y. A., Pirtle, C. J., Whyte, H. M., Goode, E. S., Anders, S. H., & Lehmann, C. U. (2018). Electronic Health Record Interactions through Voice: A Review. Applied clinical informatics, 9(3), 541–552. https://doi-org.ezproxy.newcastle.edu.au/10.1055/s-0038-1666844
  16. Kumar, M., & Mostafa, J. (2020). Electronic health records for better health in the lower- and middle-income countries: A landscape study. [Electronic health records for better health] Library Hi Tech, 38(4), 751-767. https://doi.org/10.1108/LHT-09-2019-0179
  17. Liu, J., Wan, F., Zou, J., & Zhang, J. (2023). Exploring factors affecting People’s willingness to use a voice-based in-car assistant in electric cars: An empirical study. World Electric Vehicle Journal, 14(3), 73. doi:https://doi.org/10.3390/wevj14030073
  18. Matyunina, J. (2017) ‘AI in Mobile Apps: How to Make an App Like Siri’, Codetiburon. Available at: https://codetiburon.com/ai-mobile-apps-make-app-like-siri/. (Accessed: 29 June 2018).
  19. MedWhat (2018) MedWhat | Your virtual medical assistant. Available at: https://medwhat.com/. (Accessed: 25 November 2023).
  20. Meffen, A., Sayers, R. D., Gillies, C. L., Khunti, K., & Gray, L. J. (2022). Are major lower extremity amputations well recorded in primary care electronic health records?: Insights from primary care electronic health records in England. Primary Health Care Research & Development, 23. https://doi.org/10.1017/S1463423622000718
  21. Miller, J. (2020) ‘Self-Diagnosis on Internet not Always Good Practice’, The Harvard Gazette. Available at: https://news.harvard.edu/gazette/story/2020/07/self-diagnosis-on-internet-not-good-practice/. (Accessed: 26 August 2023).
  22. Park, J., Amendah, E., Lee, Y., & Hyun, H. (2019). M‐payment service: Interplay of perceived risk, benefit, and trust in service adoption. Human Factors and Ergonomics in Manufacturing & Service Industries, 29(1), 31–43.
  23. Poder, T., Fisette, J., Dery, V. (2018) ‘Speech Recognition for Medical Dictation: Overview in Quebec and Systematic Review’, Journal of Medical Systems, 42(5), pp.1-8. DOI: 10.1007/s10916-018-0947-0.
  24. Sensely (2018) Ask NHS – Virtual Assistant. (Version 3.0.2) [Mobile app]. Available at: iTunes Store & Google Play (Downloaded: 25 December 2023).
  25. Sensely (2018) Sensely – How are you feeling today?. Available at: http://www.sensely.com/. (Accessed: 25 December 2023).
  26. Seymour, W., Zhan, X., Cote, M., & Such, J. (2023). A systematic review of ethical concerns with voice assistants. Ithaca: Cornell University Library, arXiv.org. doi: https://doi.org/10.1145/3600211.3604679
  27. Sezgin, E., Noritz, G., Lin, S., & Huang, Y. (2021). Feasibility of a Voice-Enabled Medical Diary App (SpeakHealth) for Caregivers of Children With Special Health Care Needs and Health Care Providers: Mixed Methods Study. JMIR Formative Research, 5(5). https://doi.org/10.2196/25503
  28. Snyder, E. C., Mendu, S., Sundar, S. S., & Abdullah, S. (2023). Busting the one-voice-fits-all myth: Effects of similarity and customization of voice-assistant personality. International Journal of Human-Computer Studies, 180, 1-13. doi: https://doi.org/10.1016/j.ijhcs.2023.103126
  29. Syam, N. & Sharma, A. (2018). Waiting for a sales renaissance in the fourth industrial revolution: machine learning and artificial intelligence in sales research and practice. Industrial Marketing Management, 69, 135–146.
  30. Thiago H O da, S., Furtado, V., Furtado, E., Mendes, M., Almeida, V., & Sales, L. (2024). How Do Illiterate People Interact with an Intelligent Voice Assistant? International Journal of Human – Computer Interaction, 40(3), 584-602. https://doi.org/10.1080/10447318.2022.2121219
  31. Wen-Chin, H., & Mu-Heng, L. (2023). Semantic technology and anthropomorphism: Exploring the impacts of voice assistant personality on user trust, perceived risk, and attitude. Journal of Global Information Management, 31(1), 1-21. doi: https://doi.org/10.4018/JGIM.318661
  32. You, C. & Ma, B. (2017) ‘Spectral‐Domain Speech Enhancement for Speech Recognition’, Speech Communication, vol.94, pp.30‐41. DOI: 10.1016/j.specom.2017.08.007.
  33. Your.MD AS (2017) Your.MD – Health Guide. (Version 2.8.4) [Mobile app]. Available at: iTunes App Store & Google Play (Downloaded: 25 November 2023).
