Much research examines how AI can mitigate or anticipate risks [MA 21; XU 22; YW 21], notably through
improved forecasting capabilities [KY 21] or through detection mechanisms to limit fraud and financial crimes
[HG 21; QLG 21; SAM 21]. AI has also considerably improved financial risk management, for both market and credit risk, through the automation of data collection, the construction of predictive models, resilience testing, and the evaluation of system performance, as in credit scoring [JON 21; KAS 21]. AI tools are also proving valuable in spotting potential risk signals [ARS 21]. They can further be used to assess risks and ensure effective monitoring in complex logistics networks, as well as to prevent money laundering [CSK 21; GGB 20]. Collaboration between humans and intelligent systems, that is, between human and automated intelligence, tends to produce better outcomes [BHH 21; ZRM 21]. As
with any major innovation, the introduction of emerging technologies generates societal concerns. Fear of what
is new or poorly understood is well documented. Some nations display a higher cultural tolerance for uncertainty
than others [HOF 80; HH 01], and the innovation process remains inherently unpredictable [UZU 20]. Despite
this, the potential for social disruption is a universal issue.
Although artificial intelligence is generally considered to help reduce a number of risks, it can also generate new forms of fragility, which is the focus of this article. Several categories of threats associated with the use of AI have been identified [CUL 21]. Eric Schmidt, former CEO of Google, highlights crucial issues such as algorithmic distortions, inequalities, misuse, international conflicts, and current technical limitations [MUR 21]. A relevant case is the unintentional embedding of racial or socio-economic biases in AI-based applications. Furthermore, AI systems rely heavily on the
massive exploitation of data, which they process using advanced computing technologies. These data can serve public-interest, commercial, or societal purposes. For example, some companies are using artificial
intelligence programs to examine their databases to identify consumer habits, brand interactions, and customer
profiles. Some of this information is private, which justifies growing concerns about data privacy. To strike a
balance between privacy and business objectives, the European Union's General Data Protection Regulation
(GDPR) [GDPR 18] stipulates that personal data must be "collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes." It also requires that these data be "processed lawfully, fairly and in a transparent manner in relation to the data subject" (Article 5, [GDPR 18]).
The same article also sets out strict rules on data minimization, limiting the amount of data collected, and on storage limitation, restricting the duration of its retention.
The overall objective of this article is to analyze the risks inherent in integrating AI into organizations, while identifying possible management and regulatory levers. The specific objectives are:
- Identify the types of risks (economic, legal, ethical, social) associated with AI.
- Evaluate current governance models for technological innovations.
- Propose strategies to mitigate these risks.
Based on this, we pose the following research questions:
1. What are the main risks associated with the use of artificial intelligence?
2. How do these risks vary across sectors?
3. What analytical frameworks can be used to assess and prevent these risks?
4. What recommendations can be made for responsible AI governance?
We draw on a multidisciplinary theoretical framework, notably:
- The technological innovation life cycle model (Rogers, 2003)
- Technological risk theory (Beck, 1992)
- Approaches to algorithmic ethics and AI governance (Floridi, 2018; Jobin et al., 2019)
These models allow us to structure our analysis around an analytical model integrating the interactions between innovation, risk perception, regulation, and use. The study adopts a mixed-methods exploratory approach, combining:
- Quantitative data, from surveys of technology company managers.
- Qualitative data, collected through semi-structured interviews with AI experts, lawyers, and CSR managers.
- Secondary data, from the analysis of institutional reports (OECD, UNESCO, UN), legal texts, and scientific publications.
The conceptual framework articulates the following dimensions: innovation – risk perception – governance – societal impact. The hypotheses tested include:
- H1: The more mature an organization is in its use of AI, the more it develops risk management mechanisms.
- H2: Highly regulated sectors are better at anticipating ethical risks related to AI.
The results show that perceived risks vary greatly depending on the uses
of AI. Companies operating in the healthcare, finance, and security sectors express greater sensitivity to ethical
and legal issues. The majority of organizations do not yet have clear internal policies governing the use of AI,
particularly with regard to algorithm transparency and the processing of sensitive data. The analysis also
highlights a disconnect between rapid innovation and regulatory adaptation, exposing companies to risks related