Enhancing Design Decision-Making in the Artificial Intelligence (AI) Era: A Communication-Centric Framework
Izzuddinazwan Misri, Nur Farazilla Mohd Arsad, Khairun Nisa Mustaffa Halabi
Faculty of Creative Industries, City University Malaysia
DOI: https://dx.doi.org/10.47772/IJRISS.2025.908000129
Received: 26 July 2025; Accepted: 01 August 2025; Published: 01 September 2025
ABSTRACT
This study explores the evolving role of Artificial Intelligence (AI) in design decision-making, with a particular emphasis on its impact on human cognitive faculties, including creativity, critical thinking, and intuition. As AI-driven systems increasingly redefine traditional decision-making paradigms through data-driven automation, this research examines the interplay between AI and human sense-making within innovative design processes. Adopting a communication-centric framework, the study underscores the significance of effective collaboration between AI technologies and human designers in enhancing problem-solving capabilities across organizational and creative contexts. While AI enhances efficiency, pattern recognition, and technical rationality, human intuition remains essential for ensuring ethical, contextually aware, and creatively robust design solutions.
Drawing on Karl Weick’s sense-making theory, this research provides a structured approach to understanding AI’s role in augmenting rather than supplanting human creativity. The findings highlight the necessity of interdisciplinary collaboration, AI transparency, and iterative feedback mechanisms to sustain a balanced integration of AI-driven insights with human-centered decision-making. Ultimately, this study contributes to the advancement of AI-assisted design frameworks that prioritize both efficiency and human innovation.
Keywords: AI-driven innovation, Design Management, Human cognitive abilities, Automation and creativity, Human-AI collaboration.
INTRODUCTION
Background and Motivation
Artificial Intelligence (AI) is transforming innovation and human-centered design, raising questions about how design choices are actually brought to realization. According to Verganti, Vendraminelli, and Lansiti (2020), AI motivates new styles of problem framing and innovation. This study addresses a gap by extending the sense-making theory developed by Weick (1995) to AI-assisted design, building a communication-oriented framework for applying human thinking capabilities alongside AI across diverse design fields, including product, architectural, and artistic design. Communication is also critical for keeping AI outputs coherent with human values and creativity (Bader & Kaiser, 2019; Shollo & Galliers, 2024); without defined communication strategies, there is a risk that AI-generated insights become ethically or creatively irrelevant (Sreenivasan & Suresh, 2024). The paper examines how AI assists with data analysis and pattern recognition, while humans provide moral judgement, intuition, and the context for action. Combining AI with prototyping and user testing contributes to better design outputs without compromising work efficiency or human mindfulness. In design, sense-making requires a shared interpretation among groups; ineffective communication can lead to biased or poor judgment and limited trust in AI (Grassucci et al., 2024). Explainability also remains an issue, since many AI systems operate as black boxes (Chennam et al., 2022). Open, iterative, and interdisciplinary collaboration is therefore needed to ensure that AI supplements, rather than substitutes, human expertise.
Research Problem and Objectives
The automation of learning and decision-making by AI is transforming education and innovation, reshaping not only design processes but also the way ideas and insights are communicated across teams. This shift extends beyond traditional design paradigms, influencing how information is shared, interpreted, and applied in innovation. AI is expected to enhance innovation performance, customer-centricity, and creativity, yet its successful integration depends not just on technical efficiency but also on how well AI-generated insights are communicated and understood. While AI-driven design differs from traditional, manpower-based approaches, its role in innovation frameworks must address the challenges of interpretation, collaboration, and ethical decision-making. AI’s ability to structure and define problems often surpasses human capacity, making it essential to establish clear communication strategies that ensure AI-driven solutions remain aligned with human creativity, ethics, and design objectives.
To bridge these gaps, this study applies Karl Weick’s sense-making theory (1995) as a framework for analyzing how collective cognitive and communication processes shape an organisation’s actions and decisions. By integrating sense-making theories into AI-driven design, this research examines not only the technical and cognitive aspects of AI integration but also the role of communication in facilitating human-AI collaboration. In exploring the relationship between AI, design, and leadership, this study aims to advance knowledge on how AI can be leveraged through effective communication strategies to enhance innovation and decision-making in future design systems.
LITERATURE REVIEW
Sense-making: A leadership and organisational perspective
New leadership skills arising from the impact of AI have influenced decision-making and problem resolution. AI reduces the emphasis on repetitive activities, so leaders must address adaptive challenges in uncertain contexts, which aligns with Weick’s (1995) view of sense-making as a response to organisational threats. Ancona (2012) also discusses sense-making as a powerful leadership practice, and Matlis and Christianson (2022) address its importance for creativity and change. Weick’s framework of enactment, selection, and retention is used in design to chart human interaction within continually changing contexts. AI and human designers complement each other: AI is effective at later stages of the process because it manipulates complex data (Verganti et al., 2020a, 2020b), whereas human designers excel in early phases of design because of intuition and experience gathered over time. Unclear client requirements frequently create ambiguity for leadership, requiring leaders to interpret complex situations (Bevelin, 2006), and risk management then favours proactive rather than reactive solutions. Today’s leaders integrate different opinions to cope with complexity (Ancona, 2011; Ancona et al., 2020), and sense-making helps them identify patterns and simplify decisions. According to Weick (1995), organisational culture is formed through the lived experiences of members rather than imposed consensus. Organisations cope with uncertainty and learn through experience by processing knowledge through reflection and cognitive work (Dougherty, 2020; Nardon & Hari, 2022). This paper argues that sense-making is central to design leadership, as it is a paradigm that attaches importance to both understanding and imagination in dealing with complexity (Brown, 2009; Liedtka, 2015; Dell’Era et al., 2020; Kolko, 2015).
Integrating Human Cognition with AI: The Role of Sense-Making in Design
AI can be an effective assistive tool in the creation process through its analysis of huge amounts of information and its improved resourcefulness (Ligon et al., 2020; Kolbjørnsrud et al., 2017). Nevertheless, it can confuse designers, and results can lack novelty because they tend to repeat previous patterns (Berdahl et al., 2022). Human sense-making is required to make AI output ethically and contextually appropriate (Verganti et al., 2020a), since AI cannot guarantee either normative or practical suitability. Machine learning technologies such as language processing and image recognition are now integrated into everyday activities. However, because AI relies on so-called thin data, it is insensitive to individual, cultural, and emotional nuances, which are the focal point of design (Madsbjerg, 2018). Although efficient, AI is emotionally and morally limited, especially in uncertain areas such as healthcare (Verganti et al., 2020a, 2020b). This underscores the growing need to combine human intuition with machine intelligence, prompting sense-making theories to be reconsidered and redesigned for this setting.
AI can generate hypotheses and free designers from repetitive activities (Amrollahi & Ghapanchi, 2021), yet the lack of transparency in AI decisions remains a problem arising from AI’s black-box nature (Gunning et al., 2019). Moreover, despite promising results in structured problem-solving, AI lacks qualities related to cultural and emotional design requirements, which means that human creativity and moral sense remain vital parts of the process (Elsbach & Stigliani, 2018). Following Floridi (2020), human social cognition cannot be duplicated by AI. Future design should therefore not be an exercise calculated purely in terms of efficiency but a humanised practice that is both practical and ethical.
Figure 1: The principles and relationships of Sense-making in the AI era
Conventional human cognition for pattern finding and sense-making is changing as it is coupled with AI, which opens new possibilities and complications. The five findings highlighted in our study indicate how technology is altering five areas that underpin sense-making in the age of AI. First, human-AI interaction improves decision-making: AI contributes computation and data processing, while humans contribute understanding and values (Smith et al., 2021). Second, principles of ethical decision-making are important, especially where machine learning is applied in areas such as police work and medicine and raises concerns about bias; real applications of artificial intelligence must be properly supervised by humans (Thomas, Lewis, & Couture, 2023). Third is the issue of data deluge: AI is efficient at handling voluminous data, while human workers are better at understanding the data and making effective use of it (Johne et al., 2020). Fourth, AI makes it possible to act in real time, which is crucial for competitiveness in the volatile contexts described by Huang & Zhang (2022). Finally, the interpretability of the model is crucial for trustworthy AI, since users must understand how AI works in order to make proper decisions (Miller, 2022). Together, these related principles constitute a coherent system for shaping sense-making in the context of AI affordances.
Communication in Human-AI Collaborative Design
Effective communication is essential in AI-integrated design, influencing how AI-generated insights are understood, interpreted, and applied by human decision-makers. While AI enhances data analysis, predictive modelling, and design automation, human designers must translate AI-driven outputs into meaningful, creative, and ethically sound design decisions. Weick’s sense-making theory (1995) emphasizes that decision-making is not just a cognitive process but also a social and communicative one, where individuals construct meaning through interaction and feedback. As noted by Vössing et al. (2022), communication ensures that human designers can collaborate effectively with AI systems, clarify ambiguous outputs, and align AI recommendations with organisational goals.
A key challenge in human-AI collaboration is the lack of transparency in AI decision-making, often referred to as the “black box” problem (Von Eschenbach, 2021). Chander et al. (2024) added that AI models generate solutions based on complex algorithms and vast datasets, yet without proper explanation mechanisms, designers may struggle to trust, interpret, or refine AI-driven suggestions. This highlights the need for explainable AI (XAI) frameworks, which incorporate clear, interpretable, and interactive communication strategies to improve human-AI synergy (Mohammed, 2024).
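To make the idea of interpretable communication concrete, the sketch below (illustrative only and not drawn from the paper; names such as `ExplainedRecommendation` are hypothetical) shows one way an AI suggestion could be packaged with its rationale and known caveats so that a designer can interrogate it before acting.

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedRecommendation:
    """Hypothetical container pairing an AI design suggestion with its rationale."""
    suggestion: str                               # the AI-proposed design change
    confidence: float                             # model confidence, 0.0-1.0
    drivers: dict = field(default_factory=dict)   # factor -> contribution weight
    caveats: list = field(default_factory=list)   # known limits of the model's view

    def summary(self) -> str:
        """Render a short, human-readable explanation of why the suggestion was made."""
        top = sorted(self.drivers.items(), key=lambda kv: -abs(kv[1]))[:3]
        why = ", ".join(f"{name} ({weight:+.2f})" for name, weight in top)
        return f"{self.suggestion} (confidence {self.confidence:.0%}); key drivers: {why}"

rec = ExplainedRecommendation(
    suggestion="Reduce wall thickness in low-stress regions",
    confidence=0.82,
    drivers={"stress_margin": 0.41, "material_cost": 0.33, "print_time": -0.12},
    caveats=["objective ignores aesthetics", "trained on a narrow part family"],
)
print(rec.summary())
```

Surfacing drivers and caveats alongside the suggestion is one concrete way an XAI-style interface could support the trust and refinement loop described above.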
Moreover, interdisciplinary collaboration in AI-assisted design requires strong communication practices among designers, engineers, data scientists, and business leaders (Zhang et al., 2024). Effective cross-functional communication fosters better problem-solving (Adegbola et al., 2024), ethical considerations (Attah et al., 2024), and user-centered design thinking (Nedeltcheva & Shoikova, 2017). Without these communication structures, AI-generated design solutions risk becoming misaligned with human needs, cultural contexts, or ethical standards.
PROPOSED FRAMEWORK
AI decision-making improves most when it is paired with human cognitive strengths such as ethical judgment, creativity, and contextual interpretation of complex data. Like any computational system, AI is exceptionally good at handling and interpreting big data, but conscience and ethical considerations lie outside its scope. This is where human input is required: after AI has performed an analytical task, humans make the decisions that must be ethical. The proposed concept and framework of human-AI interactive design is best presented as a layered structure that shows how human intelligence and artificial intelligence operate at multiple levels. AI handles data processing and the modelling of results and conclusions, while humans frame the problem and make the ethical calls about when to act, which makes the human-AI team more accountable and efficient.
Figure 2: Integrated Framework for Human-AI Collaborative Design Decision-Making (idea-visualised from Song B, Zhu Q, Luo J., 2024)
The layered model depicts human-AI synergy within design decision-making and comprises three layers. The Top Layer (blue) is the province of creativity, ethical reasoning, and lateral thinking: humans excel at handling ambiguity, fluctuation, and inconsistency, which remain difficult for AI, and they introduce contextual meaning and ethical frameworks into the process. The Middle Layer (green) describes the interaction area involving both humans and AI. AI produces specific results based on patterns detected during data analysis, while humans, informed by those results, act creatively and make design choices; this opens a loop that alternates between AI suggestions and human decisions built on them. Finally, the Bottom Layer (grey) captures AI system capabilities such as speed, efficiency, and the handling of big data. AI can generate solutions far faster than most people can even recognise the underlying pattern, but its decisions follow the provided set of rules and are not necessarily good or moral in every context.
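As an illustration of this layered structure, the following Python sketch (a simplified stand-in under assumed inputs, not an implementation of the authors' system) separates the bottom AI layer, the middle interaction layer, and the top human-judgment layer into three functions wired into a single decision pass.

```python
def ai_layer(raw_data):
    """Bottom layer: fast, rule-bound processing of data (trivial stand-in for pattern detection)."""
    patterns = sorted(set(raw_data))
    return {"patterns": patterns, "options": [f"option_{p}" for p in patterns]}

def interaction_layer(ai_output, human_feedback):
    """Middle layer: AI proposals meet human critique; rejected options are dropped from the loop."""
    kept = [o for o in ai_output["options"] if o not in human_feedback["rejected"]]
    return {"shortlist": kept, "revision_notes": human_feedback["notes"]}

def human_layer(shortlisted):
    """Top layer: contextual, ethical, and creative judgment makes the final call."""
    return shortlisted["shortlist"][0] if shortlisted["shortlist"] else None

ai_out = ai_layer(raw_data=[3, 1, 3, 2])
mid = interaction_layer(ai_out, human_feedback={"rejected": ["option_1"], "notes": "keep organic forms"})
decision = human_layer(mid)
print(decision)  # -> option_2
```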
The framework incorporates Weick’s components of sense-making, enactment, selection, and retention, applying them to design practices that promote a co-equal partnership between human imagination and AI computation. This mutual dependence supports the development of designs that are adaptable and creative. Table 1 shows how design expertise and data intelligence intersect, recasting conventional sense-making as a theoretical paradigm specifically designed to support design decision-making and to safeguard utility, social relevance, and originality.
Table 1: The Human-AI Collaborative Design Decision-Making Outline (adapted from Song, Zhu, & Luo, 2024; Seeber et al., 2020; Papachristos et al., 2021; Keswani et al., 2021)
In the Problem-Identification phase of the design process, human choice is central because designers establish the context of the design problem; AI assists by alerting humans to available data and prior experiences and by pulling out patterns for problem strategies. This cooperation integrates human context into AI algorithms, and machine-generated data provides a stable base for design. The Data Collection phase then gathers information to be used in later phases. Although AI offers a way to process big data with high effectiveness, human designers must ensure that the data meets the contextual requirements defined at the enactment phase.
In the Data Analysis phase, AI plays a very important role by processing the collected data and mining it at scale, while designers make decisions on the resulting information through intuition, creativity, and domain expertise. Such integrative cooperation guarantees that the design solutions developed are both innovative and reasonable. The Evaluation step uses both AI and human judgment to assess the design solutions: AI supplies performance measures, while human designers weigh functional and aesthetic factors in refining the design, and the results of these evaluations are retained to feed future cycles. In the Iteration phase, designers and AI cooperate actively, reflecting on designs and improving them in response to new information; this continuing process reinforces the idea that a design can always be made better. The important idea borrowed from Weick and added to this model is the Social Context through which framing occurs: human designers make sure that designs comply with societal, ethical, and cultural practices, while AI helps identify trends. Altogether, this framework shows how human cognition and AI data processing form a whole design strategy, amending and improving the overall design solutions.
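The phase-to-role mapping narrated above and summarised in Table 1 can also be encoded as a simple lookup structure. The sketch below is illustrative; the phase names and role wording paraphrase the text rather than reproduce the authors' table.

```python
# Illustrative mapping of design phases to Weick components and human/AI roles.
PHASES = {
    "problem_identification": {"weick": "enactment", "human": "frame the design problem and its context",
                               "ai": "surface prior cases and candidate patterns"},
    "data_collection":        {"weick": "enactment", "human": "check data against the framed context",
                               "ai": "gather and pre-process large datasets"},
    "data_analysis":          {"weick": "selection", "human": "interpret findings with intuition and expertise",
                               "ai": "mine patterns and rank candidate directions"},
    "evaluation":             {"weick": "retention", "human": "judge functional, aesthetic, and ethical fit",
                               "ai": "report performance metrics"},
    "iteration":              {"weick": "retention", "human": "decide what to carry into the next cycle",
                               "ai": "update models with evaluation results"},
}

for phase, roles in PHASES.items():
    print(f"{phase:24s} [{roles['weick']:9s}] human: {roles['human']}; AI: {roles['ai']}")
```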
Sense-making approach in AI-based design practice
An active application of AI is neither entirely disadvantageous nor entirely beneficial. On the one hand, AI benefits the design process by bringing data processing and pattern recognition capabilities; on the other, human designers’ creativity, instinct, and experience remain critical to the operation. To make synergy between the two possible, a framework is needed to support human-AI cooperation. This section therefore presents an integrated framework that combines sense-making theory with AI-based design practice and identifies a clear path for how AI and human cognition work together in support of design decisions.
Figure 3: Cyclical Diagram of the Sense-Making Approach in AI-Based Design Practice (adapted from Jarrahi, M. H., & Eshraghian, K., 2019)
Figure 3 illustrates an iterative, sense-making approach in AI-based design, integrating Karl Weick’s sense-making components across six phases: problem definition, data gathering, data assessment, structure and prototype development, assessment, and feedback. The cycle integrates both AI-based and human-based processes; the green circles represent the former and the blue circles the latter. Human activities such as creativity, intuition, and critical judgment are most visible in the Problem Definition and Design phases, where Weick’s concept of “Enactment” applies because humans actively construct the environments they design for. Phases such as Data Gathering and Analysis reflect AI’s pattern recognition and optimisation, corresponding to Weick’s “Selection”, in which large datasets are analysed by AI and solutions are suggested. The Feedback and Learning phases combine AI and human strengths, pairing human imagination with AI’s attempts to interpret the resulting work and seek out its underpinnings. Here, Weick’s “Retention” component saves significant knowledge for future improvements to the designs. Such a cyclical chain, in which AI and human agents perform different yet closely linked activities, best combines the strengths of human flexibility and AI optimisation in design practice. The table below arranges this information to provide better insight into these interactions across the phases.
Table 2: Descriptive guideline for the Sense-Making Approach in AI-Based Design Practice (adapted from Jarrahi, M. H., & Eshraghian, K., 2019)
Using the structured table below, the correspondence of Weick’s sense-making components to the dependencies between AI and human design is shown across the different phases. In the Enactment phase, especially during Problem Identification and Data Collection, humans proactively set the agenda while interpreting various sources and constructing decision-making perspectives; AI assists by collecting information, identifying patterns, and supplying analysis, freeing the human worker for more complex, less routine tasks. In the Selection phase, AI performs the Analysis step, searching extensive data for patterns relevant to the problem-solving process. Human involvement is particularly important in Design and Prototyping, where designers realise models by combining their own knowledge with insights derived from the AI outcomes. In Evaluation, AI and humans jointly assess the design results and reflect on the findings of the previous step: prototypes are evaluated by AI against predetermined goals and expectations, and these evaluations inform the designers about how to improve the prototype, while designers weigh aesthetics, ethical considerations, and feasibility alongside the AI insights. The Retention component captures the knowledge that AI and participants retain in each design cycle. This dynamic interaction augments the design process by blending AI computation with human thinking and, as depicted here, shows how AI-based design practices correspond to Weick’s themes and patterns in sense-making (Weick, 1995; Ancona, 2011).
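To show how the Retention component carries knowledge across cycles, the toy loop below (an illustrative sketch; function and variable names are hypothetical) threads a list of retained lessons from each iteration into the framing of the next one.

```python
def run_design_cycles(n_cycles=3):
    """Toy loop: lessons retained in one design cycle seed the framing of the next."""
    retention = []                                  # Weick's retention: lessons kept across cycles
    brief = "initial design brief"
    for cycle in range(1, n_cycles + 1):
        framed = f"{brief} (framed with {len(retention)} retained lessons)"  # enactment (human framing)
        candidates = [f"cycle{cycle}-candidate-{i}" for i in range(3)]       # selection (AI pattern search)
        chosen = candidates[0]                                               # human judgment picks one
        retention.append(f"{chosen}: refine constraints next time")          # evaluation feeds retention
        print(f"cycle {cycle}: {framed} -> {chosen}")
    return retention

lessons = run_design_cycles()
print(lessons)
```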
To support this understanding, the Design Process Phases with Sense-making Integration framework (Figure 4) shows where and how human cognition and AI are applied in the design process. Reciprocal arrows show information transfer, while feedback loops indicate decision support as the two systems interact. The figure traces the design process from Problem Identification through Iteration and explains where people are engaged in Enactment, Selection, and Retention tasks. The aim of this integration is to strengthen the design and development stages and to establish and continuously enhance the human-AI partnership throughout processes and decisions.
Figure 4: Design Process Phases with Sense-making Integration (Jarrahi, M. H., & Eshraghian, K., 2019; Keswani, V., et al., 2021)
The phases proceed sequentially, with arrows drawn from one phase to the next, beginning with Problem Identification and ending in Iteration. Sense-making integration is noted beneath each phase to demonstrate how it enters the picture at every stage. The flowchart, Design Process Phases with Sense-Making Integration, thus denotes how the design process unfolds and which principle of sense-making applies at each phase. In Problem Identification, humans engage in enactment to formulate the problem, while in Data Collection AI helps to source data. The process then moves to Analysis, where AI’s capacity to identify relationships is combined with human selection of relevant data. In Design and Prototyping, the human factor is the central element, with AI offering recommendations for improvement; the Selection element of the framework becomes important here because a human, not the computer, decides on the best solutions. During the Evaluation phase, AI and human sense-making work together to retain the results of the work, following the Retention principle. Lastly, the process enters the Iteration phase, where lessons learned are carried through successive cycles. The shape of the flowchart also conveys the iterative nature of the design process and indicates that human sense-making and AI are two synergistic forces that can improve design decisions.
Table 3: The Key Principles of Sense-making in AI-Enabled Design
This section extends Karl Weick’s work on sense-making and outlines the key concepts of AI’s role in design, based on five principles drawn from social theories of organisations. There are two main extensions to understand: Social Context and Ongoing Iteration. The earlier principles are concerned with sense-making among actors in organisations; the use of AI introduces new factors into the system, so the principles should be approached from a multi-perspective view. In the Social Context, Weick focuses on the meaning of information within organisational interactions. In AI-enabled design, however, this expands to include interaction among humans, the AI, and the design team. Interaction is essential in these settings because teams decode the AI-produced perceptions and infuse them with social parameters.
Ongoing Iteration is likewise necessary in AI-enabled design, even though Weick did not mention it explicitly. This principle posits that AI-supported design is cyclic: with feedback flowing between humans and machines, the decision-making process improves cycle after cycle. Such feedback loops support continuous, incremental learning from both human input and AI-generated design outputs. Together, these principles transpose a model of sense-making that reflects the intricate design contexts of the contemporary world, treating interactions as information interpreted through a socio-technological lens and including the learning processes that form part of the collaborative construction of design (Weick, 1995; Ancona, 2011; Matlis & Christianson, 2022).
IMPLEMENTATION AND CASE STUDIES
The comparative case studies of human-AI integration in design exemplify how AI optimises decision-making, creativity, and productivity across industries. Each case reflects core sense-making elements such as enactment, selection, and retention, supported by social context and iteration. These examples illustrate how the human touch, combined with machine learning and algorithms, develops new and sustainable ways of working. The case studies also raise issues such as rationality versus intuition and ethical concerns.
In total, they offer real-world examples of how sense-making theory applies to AI in design contexts, including structured, positive, and varied strategies that are helpful for current design work.
Table 5: Comparative Case Studies of Human-AI Integration in Design
Comparing case studies of human-AI integration in design demonstrates both the potential benefits of pairing human creativity with AI and the problems that arise from this integration. In Autodesk’s Dreamcatcher (2021), generative design software lets the designer set goals while AI floods the process with design options, shortening design cycles and increasing creativity; however, designers face a significant challenge in understanding the advanced AI-generated models. In Project Magenta (2020), Google Research investigates how artificial intelligence can work alongside humans to build music, with humans composing and AI contributing complementary material. Likewise, IBM Watson (2022) advances product design by using AI to identify consumer preferences and plan effectively, although dependency on AI can hamper designers’ creativity. Nike’s Flyknit shoe design (2021) applied AI to reduce material waste for environmental conservation, although such optimisation is difficult to harmonise with aesthetic goals. Finally, in Stanford’s Augmented Creativity Lab (2023), AI supports architectural design, raising questions about the ethics of choices made by AI systems. These examples confirm that AI brings significant effectiveness and creative leeway to designers’ work, while also revealing how much human ability is needed to manage it and to guarantee emotionally resonant and ethical design outputs. This paper argues that human-AI integration is desirable for design but calls for a balance between the accuracy that comes with AI implementation and the designers’ creative thinking.
COMPARATIVE ANALYSIS
Comparative Analysis of Human Sense-Making and AI Algorithmic Thinking
Both human sense-making and AI algorithmic thinking offer instrumental value to design practice through their respective strengths, while each also presents weaknesses. Human judgment brings intuition and moral awareness but is prone to bias and limited in data-processing capacity. AI performs best on well-defined quantitative tasks and massive-scale data analysis, while it is weak on context and creativity. The adopted conceptual model draws on the main strengths of both: it brings together human problem definition, creativity, and contextual understanding with the processing ability, data handling, speed, and computational strength of AI systems. This cycle improves decision-making in design by integrating human judgment with artificial intelligence, thereby creating innovative, socially sensitive, and efficient design solutions.
Figure 5: Comparative Analysis of Human Sense-making vs. AI Algorithmic Thinking
Since human sense-making and AI algorithmic thinking share some characteristics and differ in others, they can be presented in the form of a Venn diagram. Human cognition is experience-based, relational, synthetic, creative, and ethical, as complex design problems require. AI shines at storing data, finding data-based patterns, and drawing data-driven conclusions, but it fails at context, feeling, and morality. When integrated, AI’s fast and precise processing compensates for the limits of human capacity, while human imagination and contextual understanding compensate for AI’s. Combined, these approaches advance design by integrating AI’s data processing with human choice and decision-making, weighing the moral implications of solutions to deliver improved creative work.
In design practice, both human-centric sense-making and AI algorithmic logic have advantages and disadvantages. Human cognition yields creativity, intuition, and reasoned decision-making, but is constrained by time and prone to bias. AI is good at ferreting out data, perceiving patterns, and pursuing solutions quickly, but it has no imagination and little moral compass. Using both approaches together can therefore produce design solutions that combine the efficiencies of data-driven decision-making with the ethical, creative judgment of humans. This ensures that designs are responsible, innovative, and appropriate to their surrounding environment, and recognising this duality is important for the further evolution of design practice. The comparative table of human sense-making and AI algorithmic thinking is presented below.
Table 4: Comparative aspects of human sense-making vs. AI algorithmic thinking
The key understanding shown by the Comparative Analysis of Human Sense-making vs. AI Algorithmic Thinking table is that humans and AI are collaborators in design practice, not competitors. The table reflects how human sense-making and AI algorithmic thinking are interdependent in design. Technology is limited in creativity and contextual learning and thus cannot replace human beings in facets such as problem identification and decision-making in ethically ambiguous environments. Humans, however, suffer from cognitive biases, can get locked into fixed thinking, and struggle with huge amounts of data. AI has advantages in data handling, accuracy, and speed but weaknesses in creativity and cultural intelligence. Together they work synergistically: AI brings design efficiency through computational intelligence, while human intervention provides the ethical and cultural standpoint. This shapes unique and viable solutions attuned to local contexts and the pursuit of ethical design goals.
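One way to operationalise this division of labour is a simple routing rule that sends each design sub-task to the AI, the human, or a joint loop depending on data volume, ambiguity, and ethical stakes. The thresholds below are hypothetical and purely illustrative of the complementarity argued in the table.

```python
def route_task(data_volume: int, ambiguity: float, ethical_stakes: float) -> str:
    """Route a design sub-task to AI, human, or joint handling (illustrative thresholds)."""
    if ethical_stakes > 0.5 or ambiguity > 0.7:
        return "human-led"            # contextual and moral judgment dominates
    if data_volume > 100_000 and ambiguity < 0.3:
        return "AI-led"               # large-scale, well-specified pattern work
    return "joint"                    # AI proposes, human interprets and decides

print(route_task(data_volume=2_000_000, ambiguity=0.1, ethical_stakes=0.1))  # AI-led
print(route_task(data_volume=500,       ambiguity=0.8, ethical_stakes=0.2))  # human-led
print(route_task(data_volume=50_000,    ambiguity=0.5, ethical_stakes=0.3))  # joint
```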
DISCUSSION
The Implications
This study builds upon Weick’s sense-making framework by incorporating AI into modern design practices, underscoring the importance of communication in human-AI collaboration. Beyond the traditional elements of enactment, selection, and retention, communication functions as a vital bridge, ensuring that AI-generated insights are interpreted, ethically implemented, and seamlessly integrated into human decision-making. Unlike previous models that primarily emphasised AI’s computational capabilities, this approach focuses on co-design, where humans and AI collaborate through ongoing dialogue, iterative feedback, and interdisciplinary exchange. Additionally, the study broadens theoretical perspectives by exploring social context, ethical considerations, and emerging trends while proposing a process that facilitates structured information exchange between AI systems and human designers. By doing so, it addresses AI’s limitations in independent reasoning, contextual understanding, and ethical validation of design choices.
Moreover, this study advances discourse in design literature by elucidating the critical role of communication in human-AI collaboration, particularly in fostering ethical, innovative, and contextually appropriate design solutions. The proposed framework is broadly applicable across multiple disciplines, including architecture, product design, and user experience (UX) design, where AI contributes to solution optimisation, structural computations, and data-informed decision-making. However, the efficacy of AI-driven design interventions is contingent upon the clarity with which AI-generated recommendations are communicated, interpreted, and assimilated by human stakeholders. By integrating principles of AI explainability, structured feedback mechanisms, and interdisciplinary collaboration, this research underscores the necessity of transparent communication in ensuring that AI-assisted design processes align with human needs, cultural sensitivities, and ethical imperatives.
Furthermore, AI’s ability to process vast datasets and generate design solutions in real time makes the design process more dynamic than traditional techniques. However, without effective communication mechanisms, these advancements risk becoming misaligned with user expectations, business goals, and regulatory frameworks. The study highlights how communication enhances AI’s role as a design partner, ensuring that AI-generated insights are not just technically efficient but also meaningful, user-centred, and ethically responsible. The structured approach presented in this research ultimately improves design expertise, precision, and consumer satisfaction while reducing time and costs in complex design environments.
RECOMMENDATIONS FOR FUTURE RESEARCH
Although the proposed framework is effective in achieving several design-related objectives, certain limitations affect its applicability across all design disciplines. One key aspect that requires further exploration is the role of communication in AI-driven design decision-making. While AI performs well in fields such as architecture, UX, and product design, its effectiveness in more subjective domains like graphic design and fashion design—where choice is influenced by cultural trends, personal aesthetics, and human emotion—remains uncertain. AI’s lack of contextual intelligence and heuristic creativity often results in functionally acceptable but creatively shallow designs. In such cases, human designers must reinterpret, refine, or even entirely rework AI-generated outputs, underscoring the need for effective human-AI communication and interaction in these design processes.
Future research should explore how communication strategies can bridge the gap between AI-generated insights and human-driven creative processes. This includes studying how AI can better communicate its design rationale, allowing designers to interpret, critique, and adapt AI-assisted outputs more effectively. Additionally, research should examine how explainable AI (XAI) frameworks can improve human-AI collaboration by making AI-generated recommendations more transparent and actionable.
Another key area for future study is the role of interdisciplinary communication in AI-assisted design. As AI tools become more integrated into creative industries, effective collaboration between designers, engineers, data scientists, and business strategists will be essential. Research could investigate the impact of cross-functional communication strategies in ensuring that AI-generated designs align with user expectations, cultural relevance, and ethical standards. Furthermore, issues related to bias, privacy, and cultural sensitivity remain critical challenges in AI-driven design. Studies should focus on how communication can facilitate more inclusive AI models that respect diverse cultural aesthetics and ethical considerations. Research on Natural Language Processing (NLP) for AI-driven design could also provide insights into how AI systems can better understand human design preferences and contextual cues, leading to more adaptive and human-centred AI design frameworks.
Finally, enhancing AI training datasets and refining AI-human interaction models will be crucial for expanding AI’s capabilities in creative fields. Future research should explore how communication theories and cognitive science principles can be integrated into AI design models to ensure that human designers and AI systems work together more effectively in solving complex design challenges.
CONCLUSION
This paper offers a conceptual synthesis of the integration of sense-making with AI-driven design and presents a framework for AI-human collaboration. Grounded in Weick’s sense-making theory, the study emphasises the importance of social context, iteration, and structured feedback in modern design processes. Key elements such as enactment, selection, and retention contribute to problem definition, data gathering, and design refinement, ultimately enhancing efficiency and innovation. While AI facilitates data-driven design solutions through computation, human designers remain essential in ensuring that these solutions are socially relevant, ethically sound, and creatively meaningful. The proposed framework supports real-time feedback loops that allow AI-generated insights to be continuously refined through human interpretation and contextualization, particularly in architecture, product design, and user interface technology.
Beyond technical integration, this study highlights the critical role of communication in AI-assisted design. Effective human-AI communication bridges the gap between AI’s computational strengths and human creativity, ensuring that AI-driven design decisions align with user expectations, ethical considerations, and cultural relevance. The research also underscores the need for explainability in AI systems, advocating for transparent, interpretable AI outputs that facilitate clear collaboration between AI systems and human designers.
Furthermore, this study expands Weick’s sense-making framework by incorporating structured communication as a key component of human-AI collaboration. It calls for a dynamic approach to design that acknowledges the evolving nature of both AI technologies and human decision-making processes. Ethical concerns such as bias, responsibility, and accountability remain central to the discussion, reinforcing the need for human oversight in AI-driven design. Ensuring that communication strategies are embedded within AI frameworks will be essential for promoting fair, inclusive, and culturally sensitive design solutions. Ultimately, this research calls for a cautious yet strategic deployment of AI in design, recognizing that AI’s potential is maximised when combined with human insight, ethical reasoning, and effective communication practices. By fostering transparent, explainable, and collaborative AI systems, organizations can ensure that AI-driven design remains not just efficient, but also socially responsible and creatively enriched.
REFERENCES
- Adegbola, A. E., Adegbola, M. D., Amajuoyi, P., Benjamin, L. B., & Adeusi, K. B. (2024). Fostering product development efficiency through cross-functional team leadership: Insights and strategies from industry experts. International Journal of Management & Entrepreneurship Research, 6(5), 1733–1753.
- Amrollahi, A., & Ghapanchi, A. H. (2021). AI-enhanced design: The interplay between AI and human designers. Design Studies, 73, 101019. https://doi.org/10.1016/j.destud.2021.101019
- Ancona, D. (2011). Leadership in an era of uncertainty. The Leadership Quarterly, 22(1), 8–17. https://doi.org/10.1016/j.leaqua.2010.12.001
- Ancona, D. (2012). Sensemaking: Framing and acting in the unknown. In D. A. Whetten & A. E. Melnyk (Eds.), The handbook of organizational theory and management: Methodological approaches (pp. 91–124). Taylor & Francis.
- Ancona, D., Goodman, P. S., Lawrence, B. S., & Tushman, M. L. (2020). Connections: The power of networked leadership. Harvard Business Review Press.
- Attah, R. U., Garba, B. M. P., Gil-Ozoudeh, I., & Iwuanyanwu, O. (2024). Cross-functional team dynamics in technology management: A comprehensive review of efficiency and innovation enhancement. Engineering Science and Technology Journal, 5(12), 3248–3265.
- Autodesk. (2021). Dreamcatcher: Generative design software. https://www.autodesk.com/solutions/generative-design
- Bader, V., & Kaiser, S. (2019). Algorithmic decision-making? The user interface and its role for human involvement in decisions supported by artificial intelligence. Organization, 26(5), 655–672.
- Bevelin, D. (2006). Sensemaking: A decision-making perspective. Organizational Behavior and Human Decision Processes, 100(1), 59–73. https://doi.org/10.1016/j.obhdp.2006.04.001
- Berdahl, J. L., Cooper, M., Murray, R., & Thompson, L. (2022). The automation of design: Implications for innovation and creativity. Research Policy, 51(1), 104439. https://doi.org/10.1016/j.respol.2021.104439
- Brown, T. (2009). Change by design: How design thinking creates new alternatives for business and society. HarperBusiness.
- Chander, B., John, C., Warrier, L., & Gopalakrishnan, K. (2024). Toward trustworthy artificial intelligence (TAI) in the context of explainability and robustness. ACM Computing Surveys.
- Chennam, K. K., Mudrakola, S., Maheswari, V. U., Aluvalu, R., & Rao, K. G. (2022). Black box models for explainable artificial intelligence. In Explainable AI: Foundations, Methodologies and Applications (pp. 1–24). Springer.
- Dell’Era, C., Frattini, F., & Aversa, P. (2020). The Empathy-Abduction-Experiment (EAE) model: Fostering innovation in design thinking. Journal of Product Innovation Management, 37(4), 319–337. https://doi.org/10.1111/jpim.12532
- Dougherty, D. (2020). Sense-making: The process of learning from experience. Academy of Management Perspectives, 34(2), 148–162. https://doi.org/10.5465/amp.2019.0116
- Elsbach, K. D., & Stigliani, I. (2018). Designing and implementing the hybrid workspace: Understanding the role of space in the social dynamics of organizations. Journal of Business Research, 87, 118–125. https://doi.org/10.1016/j.jbusres.2017.02.010
- Floridi, L. (2020). The ethics of artificial intelligence: A framework for responsible AI. Oxford University Press.
- Grassucci, E., Park, J., Barbarossa, S., Kim, S.-L., Choi, J., & Comminiello, D. (2024). Generative AI meets semantic communication: Evolution and revolution of communication tasks. arXiv Preprint, arXiv:2401.06803.
- Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G.-Z. (2019). Explainable artificial intelligence (XAI). In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data) (pp. 155–164). IEEE.
- Hassani, M., et al. (2022). AI and creativity: The need for human cognition in design processes. Creativity Research Journal, 34(3), 269–283. https://doi.org/10.1080/10400419.2022.2086592
- Huang, Z., & Zhang, Z. (2022). Real-time analytics for competitive advantage in the financial and healthcare industries. Journal of Business Analytics, 5(2), 130–145. https://doi.org/10.1080/25732338.2022.2080965
- Jarrahi, M. H., & Eshraghian, K. (2019). AI in knowledge work: Automating routines while augmenting expertise. Journal of Information Technology, 34(4), 289–304. https://doi.org/10.1177/0268396219855293
- Johne, S., Müller, T., & Reichelt, L. (2020). The importance of human judgment in AI-driven analytics: A framework for understanding decision-making. International Journal of Information Management, 51, 102029. https://doi.org/10.1016/j.ijinfomgt.2019.102029
- Keswani, V., Mohan, M., Balakrishnan, S., Kesavan, S. K., & Rajagopal, A. (2021). Decision deferral to multiple human decision-makers in human-AI collaboration. IEEE Transactions on Cybernetics, 51(11), 5487–5499. https://doi.org/10.1109/TCYB.2021.3074889
- Kolbjørnsrud, V., Amico, R., & Thomas, R. J. (2017). How artificial intelligence will change the future of work. MIT Sloan Management Review, 58(1), 24–35.
- Kolko, J. (2015). Design thinking comes of age. Harvard Business Review, 93(9), 66–71.
- Liedtka, J. (2015). Why design thinking works. Harvard Business Review, 93(9), 72–79.
- Ligon, G. S., Hunter, S. T., & Mumford, M. D. (2020). The role of artificial intelligence in design: Implications for organizational creativity and innovation. Journal of Business Research, 121, 107–118. https://doi.org/10.1016/j.jbusres.2020.07.008
- Madsbjerg, C. (2018). Sensemaking: The power of the human insight in a data-driven world. Harvard Business Review Press.
- Matlis, S. J., & Christianson, M. K. (2022). Sensemaking in leadership: Creating change and fostering creativity in uncertainty. The Leadership Quarterly, 33(2), 101–118. https://doi.org/10.1016/j.leaqua.2021.101118
- Miller, T. (2022). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2022.103494
- Mohammed, B. (2024). The synergy of explainable AI and learning analytics in shaping educational insights. IAENG International Journal of Computer Science, 51(9).
- Nike. (2021). Flyknit: A sustainable approach to shoe design using AI.
- Nedeltcheva, G. N., & Shoikova, E. (2017). Coupling design thinking, user experience design and agile: Towards cooperation framework. In Proceedings of the International Conference on Big Data and Internet of Thing (pp. 225–229).
- Papachristos, E., Johansen, P. S., Jacobsen, R. M., Bysted, L. B., & Skov, M. B. (2021). How do people perceive the role of AI in human-AI collaboration to solve everyday tasks? In Proceedings of the ACM Greek SIGCHI Chapter (pp. 1–6). https://doi.org/10.1145/3461741.3462189
- Seeber, I., Bittner, E., Briggs, R. O., De Vreede, G.-J., De Vreede, T., Elkins, A. M., … & Maier, R. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2), 103174. https://doi.org/10.1016/j.im.2019.103174
- Shollo, A., & Galliers, R. D. (2024). Constructing actionable insights: The missing link between data, artificial intelligence, and organizational decision-making. In Research Handbook on Artificial Intelligence and Decision Making in Organizations (pp. 195–213). Edward Elgar Publishing.
- Smith, J., Lee, C., & Park, M. (2021). Enhancing decision-making through AI integration: The human element in analytics. Decision Support Systems, 145, 113547. https://doi.org/10.1016/j.dss.2021.113547
- Song, B., Zhu, Q., & Luo, J. (2024). Human-AI collaboration by design. In Proceedings of the Design Society, 4, 2247–2256. https://doi.org/10.1017/pds.2024.227
- Sreenivasan, A., & Suresh, M. (2024). Design thinking and artificial intelligence: A systematic literature review exploring synergies. International Journal of Innovation Studies.
- Thomas, M., & Lewis, J. (2023). The role of AI in preventing ethical violations: A call for audits. AI & Society, 38(1), 1–14. https://doi.org/10.1007/s10209-022-00884-4
- Verganti, R., Vendraminelli, L., & Lansiti, M. (2020). The role of artificial intelligence in innovation: The impact of AI on the future of design. Research Policy, 49(6), 104089. https://doi.org/10.1016/j.respol.2020.104089
- Verganti, R., Vendraminelli, L., & Lansiti, M. (2020a). Design as a driver of innovation in the digital age: Implications for organizational behavior. Journal of Business Research, 121, 559–564. https://doi.org/10.1016/j.jbusres.2020.06.018
- Von Eschenbach, W. J. (2021). Transparency and the black box problem: Why we do not trust AI. Philosophy & Technology, 34(4), 1607–1622.
- Vössing, M., Kühl, N., Lind, M., & Satzger, G. (2022). Designing transparency for effective human-AI collaboration. Information Systems Frontiers, 24(3), 877–895.
- Weick, K. E. (1995). Sensemaking in organizations. Sage Publications.
- Zhang, M., Zhang, X., Chen, Z., Wang, Z., Liu, C., & Park, K. (2024). Charting the path of technology-integrated competence in industrial design during the era of Industry 4.0. Sustainability, 16(2), 751.
APPENDIX
Appendix A: Questionnaire
APPENDIX B
Additional Case Study Information