International Journal of Research and Innovation in Social Science


Decoding Intelligence in Artificial Intelligence: Tracing Historical Evolution, Analysing Definitions, and Regulatory Challenges

Elizaveta Filina

Doctoral Fellow, University of Debrecen, Hungary

Géza Marton Doctoral School of Legal Studies

DOI: https://dx.doi.org/10.47772/IJRISS.2024.801137

Received: 29 December 2023; Revised: 08 January 2024; Accepted: 11 January 2024; Published: 14 February 2024

ABSTRACT

The text provides a detailed exploration of artificial intelligence (AI), spanning its historical development, types based on capabilities and functionalities, achievements, and challenges in defining the term. The discussion encompasses the limitations of current AI, particularly in achieving human-like intelligence. The text addresses the absence of a universally accepted definition for AI and explores various definitions from influential figures in the field. It emphasises the challenges in regulating AI due to the lack of consensus on its definition. The narrative concludes by underscoring the necessity of clear definitions for effective regulation amid the increasing integration of AI into diverse sectors.

Keywords: artificial intelligence (AI), historical development of AI, types of AI, intelligence, definitions of AI.

INTRODUCTION

In recent years, we have witnessed the rapid development and widespread implementation of artificial intelligence technologies against the backdrop of an active process of digitisation. Artificial intelligence has become a key element of the national strategies of various countries and a major topic at the largest international conferences. It has also entered many aspects of our daily lives, ranging from the seamless integration of AI systems in music and social media platforms that curate personalised experiences, advertisements and products, to virtual assistants like Siri and Alexa. The application of artificial intelligence provides humanity with tremendous opportunities, yet it is accompanied by new challenges and threats. All of this opens up broad prospects for society but also creates significant problems that we will have to address in the near future.

Therefore, it is necessary to understand how we can avoid these risks, control them, and create a legal framework. To do so, we must first return to the basics: the correct use and understanding of key terms.

A broad understanding of the term “artificial intelligence” carries the risk of overloading it with multiple interpretations. Moreover, in our opinion, the term “artificial intelligence” does not always correspond to the real meaning of the word “intelligence” or to existing technologies. In this paper we therefore ask: is it appropriate to apply the term “artificial intelligence” to, for example, simple systems?

THE PURPOSE OF THE STUDY

This study gives an overview and contributes to a comprehensive analysis and synthesis of existing definitions of intelligence, various types of AI, and challenges in defining and regulating AI.

The paper also proposes a definition of AI that seeks to encompass both weak and strong AI, acknowledging the current limitations of AI technologies while leaving room for future advancements.

In this paper, the significance lies in synthesising existing knowledge, providing a fresh perspective, and addressing current gaps or challenges. Therefore, this paper contributes to the ongoing discourse on intelligence and AI, offering a well-structured and comprehensive overview.

BRIEF HISTORY OF THE DEVELOPMENT OF AI

Research in the field of artificial intelligence began its development in the middle of the last century. In the 1930s, Alan Turing introduced the idea of creating programmable devices capable of solving specific tasks. Far ahead of his time, he discussed a concept that laid the foundation for future technological achievements.

In 1950, Turing published a work titled “Computing Machinery and Intelligence,” where he first proposed a test known as the “Turing Test.”

Turing suggested that the question of machine thinking is too vague, but if we focus on a specific game, ‘The Imitation Game,’ involving a digital computer, we have a more precise and discussable question. Turing believed that digital computers could excel in this game.

‘The Turing Test’ is also used more broadly to describe behavioural tests for intelligence in entities assumed to have a mind. Some argue that Descartes’ Discourse on the Method already prefigures such a test.

Additionally, ‘The Turing Test’ is employed to refer to behavioural conditions for mind or intelligence. For instance, Ned Block’s ‘Blockhead’ thought experiment challenges The Turing Test. The idea is that an entity could pass behavioural tests but lack true intelligence. [1]

This test generalises the idea that a machine can be considered intelligent if a person interacts with it and another person cannot determine which one they are interacting with—a machine or a human.

The Turing Test has become a fundamental question in the field of artificial intelligence and a key element in assessing the rationality of artificial systems. Although this approach was criticised by philosophers, it nonetheless predetermined the pragmatic attitude that is still applied to AI. This concept paved the way for research into machines capable of mimicking human intelligence and had a significant impact on the development of technologies in the subsequent decades.

In 1943, the American scientists W. McCulloch, a neurophysiologist and one of the founders of cybernetics, and W. Pitts, a logician and mathematician, first proposed a mathematical model of an artificial neural network in their joint paper “A Logical Calculus of the Ideas Immanent in Nervous Activity”. These results laid the foundations for the development of AI and for the revolutionary concept of the human brain as a computer.[2]
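The McCulloch-Pitts model can be illustrated in a few lines: a unit receives binary inputs, sums them with fixed weights, and fires only if the sum reaches a threshold. The sketch below is illustrative only; the particular weights and thresholds are our own arbitrary choices, not values from the 1943 paper.

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: returns 1 if the weighted sum of
    binary inputs reaches the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the unit computes logical AND.
assert mcculloch_pitts_neuron([1, 1], [1, 1], 2) == 1
assert mcculloch_pitts_neuron([1, 0], [1, 1], 2) == 0

# Lowering the threshold to 1 turns the same unit into logical OR.
assert mcculloch_pitts_neuron([0, 1], [1, 1], 1) == 1
```

The key insight of the paper is visible even in this toy: by combining such threshold units, any logical function can be computed, which is what suggested the brain-as-computer analogy.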

But the term “artificial intelligence” officially appeared in 1956.[3] The computer scientist John McCarthy coined it at a 1956 conference held at Dartmouth College, which is considered the inception of AI as a cross-disciplinary research field. McCarthy’s phrase symbolically marked the beginning of an idea and set the course for the Dartmouth Summer Research Project.[4]

John McCarthy stated that Artificial Intelligence “is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”[5]

Initially, a neurocybernetic approach was used in the development of AI systems. It was on the basis of the theory of W. McCulloch and W. Pitts that the first computer model of brain perception, also called the cybernetic model of the brain, appeared: in 1957, Frank Rosenblatt proposed the perceptron, and on its basis he subsequently built the world’s first neurocomputer, the Mark I.

This approach involved modelling brain-like structures, that is, building an artificial neural network and training it to solve intellectual problems. The neurocybernetic approach also rested on the idea that, since the human brain is capable of thinking, any “intelligent” device should be created in its image, copying its structure and principle of operation.
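The “training” mentioned above can be sketched with Rosenblatt’s perceptron learning rule: start from zero weights and nudge them toward each misclassified example until the unit classifies the training data correctly. This is a minimal illustration under our own assumptions; the toy task (learning logical OR) and the parameter choices are not from Rosenblatt’s work.

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Perceptron learning rule. samples: list of (inputs, target)
    pairs with target 0 or 1. Returns learned weights and bias."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred  # -1, 0, or +1
            # Shift the decision boundary toward the misclassified point.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # logical OR
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
assert all(predict(x) == t for x, t in data)
```

For linearly separable data such as this, the perceptron convergence theorem guarantees the rule finds a correct boundary in finitely many updates, which is what made the approach so promising at the time.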

But there was also an alternative approach that proposed building a model of how a person reasons, thinks, and makes logical conclusions. This approach does not focus on or attempt to model the structure of the human brain.

Although many may believe that a breakthrough in artificial intelligence has only happened now, research in this area has in fact been going on for more than half a century, with both great breakthroughs and periods of stagnation.

For example, in the mid-1970s and late 1980s, high expectations were not met, which led to reductions in funding and a decline in interest in AI in general.

After the big breakthrough of 1957, machine learning algorithms improved and researchers gained a better understanding of which algorithm to apply to which problem. Early demonstrations such as Newell and Simon’s General Problem Solver[6] and Joseph Weizenbaum’s ELIZA showed promising results in problem solving and spoken language interpretation, respectively.[7] Scientists convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at multiple institutions. The government was particularly interested in a machine that could transcribe and translate spoken language and perform high-throughput data processing. The problem was that both the government and scientists had expectations that were too high. In 1970, Marvin Minsky told Life Magazine: “In three to eight years we will have a machine with the general intelligence of the average person.”[8]

In reality, much was not feasible at the time, one factor being the insufficient power of computers. In 1976, Hans Moravec argued that computers were still millions of times too weak to exhibit intelligence. He offered an analogy: artificial intelligence requires computer power in the same way that airplanes require horsepower. Below a certain threshold it is impossible, but as power increases it can eventually become easy.[9]

Despite this, the 1980s saw a so-called “boom”. The resurgence of interest in AI was driven by two main factors: the expansion of algorithmic tools and increased funding. John Hopfield and David Rumelhart became known for promoting “deep learning” techniques that allow computers to learn from experience. At the same time, Edward Feigenbaum introduced expert systems designed to copy the decision-making processes of human experts. Building such systems involved consulting experts in a particular field to obtain recommendations for specific situations; once experience from various scenarios was accumulated, non-experts could benefit from the advice provided by the program.[10] Expert systems found wide application in various industries. MYCIN, created in the 1970s, was an early example of an expert system designed for medical diagnostics.[11] Later, global enterprises initiated the widespread adoption of expert systems, catalysing a transformative era in the evolution of artificial intelligence. Concurrently, the exploration of “knowledge-based reasoning” emerged as a pivotal focus within the overarching field of AI research.

Additionally, in the 1980s, the Japanese Government launched the Fifth Generation Computer Systems project, with an emphasis on AI and parallel processing. This initiative aimed to develop advanced computing technologies, including AI-based systems.[12]

In 1982, David Marr’s posthumously published book Vision had a profound impact on cognitive science.[13] Subsequently, in 1986, Brooks proposed a new approach to the structure and understanding of intelligence, different from symbolic artificial intelligence. His contribution was the creation of behavioural robotics, which became a significant direction in the development of artificial intelligence.[14]

Despite these successes, AI development experienced another “winter” from 1987 to 1993. According to HP Newquist, editor and publisher of Artificial Intelligence Trends, this was because the failure of many companies created a general impression that the technology was not viable.[15]

Among other factors, the difficulty of maintaining and upgrading expert systems also slowed down the development of AI.[16]

The impressive victories of artificial intelligence over humans attracted widespread attention: in 1997, the computer Deep Blue defeated chess champion Garry Kasparov.[17] Deep Blue could process 200 million possible moves per second and determine the optimal next move by looking up to 20 moves ahead.[18] For the computer technology of that period, this outcome represented a significant breakthrough and “served as a huge step towards an artificially intelligent decision making program.”[19]
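The lookahead described above rests on minimax search: the machine assumes both players play optimally and evaluates positions several moves ahead. The sketch below shows the bare principle on a hand-made game tree; the tree and its leaf scores are invented for illustration, and Deep Blue itself added specialised hardware, alpha-beta pruning and extensive chess knowledge on top of this idea.

```python
def minimax(node, maximizing):
    """node is either a numeric leaf score (static evaluation of a
    position) or a list of child nodes (positions reachable in one move)."""
    if isinstance(node, (int, float)):  # leaf: stop searching, score it
        return node
    scores = [minimax(child, not maximizing) for child in node]
    # The player to move picks the best score from their perspective.
    return max(scores) if maximizing else min(scores)

# A two-ply tree: the maximiser chooses a branch, then the minimiser
# replies with the worst outcome for the maximiser.
tree = [[3, 12], [2, 8], [1, 14]]
assert minimax(tree, maximizing=True) == 3  # best guaranteed score
```

The first branch guarantees at least 3, whereas the others can be forced down to 2 and 1; searching deeper trees is the same recursion, just with more plies.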

In 1998, Tim Berners-Lee proposed the Semantic Web, a semantics-based knowledge network or knowledge representation. Its essence is to add machine-readable metadata to documents on the World Wide Web (such as HTML), turning the entire Internet into a universal medium for information exchange.[20]

Since the end of the 1990s, the development of AI continued to grow, and machines appeared adept at addressing a myriad of challenges, including the nuanced realm of human emotions. This was exemplified by Kismet, a robot engineered by Cynthia Breazeal, which showcased the capability to discern and express emotions.

The third wave of AI commenced with the introduction of deep learning, significantly accelerating societal development. The pivotal breakthrough in overcoming ImageNet challenges in 2012 marked the recent rise of deep learning. The widespread incorporation of deep learning signifies a significant victory for connectionist approaches, establishing AI ubiquitously in modern contexts.[21]

At the same time, the world continued to watch AI achieve ever greater successes and breakthroughs: in 2011, IBM’s supercomputer Watson beat the world champion in general erudition on a TV quiz show; in 2016, AlphaGo beat a top-ranking professional player from South Korea in the game of Go. And in 2021, several natural language processing models from Baidu, Google and Microsoft outperformed humans in SuperGLUE text comprehension tests, with the AI achieving 90.8%.[22]

Due to significant progress in deep learning since 2015, artificial intelligence is now considered to be in a “golden age” resurgence. In 2018, the scientists Yoshua Bengio, Geoffrey Hinton, and Yann LeCun were honoured with the Turing Award,[23] recognised as pioneers in the field of deep learning. Their approach has greatly advanced critical areas like computer vision and speech recognition. Many modern AI technologies, from self-driving cars to medical devices with AI features, are built on the principles of deep learning.

Thus, artificial intelligence is progressively permeating diverse sectors such as medicine, energy, communications, urban management, and transportation, seamlessly integrating into various aspects of daily life. Its application extends to critical areas like healthcare, social security, justice, and law enforcement.

As can be seen from the analysis of the historical development of AI, at this stage we continue to observe its “golden age”. In modern realities, technological progress covers almost all spheres of human life, and the success of countries in economics and development is determined, among other things, by technological progress. Currently, governments and private stakeholders are dedicating huge investments to research and further development of AI. But as technologies grow and develop, so do the risks associated with their use. Here it is worth mentioning once again the need to foresee, control, and legislate these risks. And for this, a complete and comprehensive understanding of the definitions and concepts of the subject we are considering is necessary.

ANALYSIS OF DEFINITIONS OF “INTELLIGENCE”

Before analysing the concepts of AI, it is necessary to understand the basic definitions. Therefore, let us start with the definition of “intelligence”.

Newell and Simon wrote: “By general intelligent action we wish to indicate the same scope of intelligence as we see in human action: that in any real situation behaviour appropriate to the ends of the system and adaptive to the demands of the environment can occur, within some limits of speed and complexity.”

The American Heritage Dictionary explains “intelligence” as “the ability to acquire, understand, and use knowledge”.[24]

“AllWords” dictionary claims that “intelligence” is “Capacity of mind, especially to understand principles, truths, facts or meanings, acquire knowledge, and apply it to practise; the ability to learn and comprehend”.[25]

Cambridge English Dictionary defines “intelligence” as “the ability to learn, understand, and make judgments or have opinions that are based on reason”.[26]

Within Wikipedia, one can find the following definition of “intelligence”: “Intelligence is characterised by the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. This multifaceted concept can be delineated as the ability to perceive or infer information and subsequently retain it as knowledge for application in adaptive behaviours within a given environment or context.”[27]

Some psychologists define “intelligence” in terms of judgement, practical sense, initiative, and adaptability.[28]

Another viewpoint posits intelligence as the aggregate or global capacity of an individual to act purposefully, think rationally, and effectively engage with their environment.[29]

The synthesis of diverse definitions highlights common attributes: learning, understanding, reasoning, self-awareness, and the practical application of knowledge. When addressing the definition of “artificial intelligence,” is it reasonable to extend all of these qualities to machines? We need to ask whether today’s machines can have such human-like qualities. To answer this and draw any conclusion, we first need to see how AI researchers identify several “types of AI,” before we analyse full definitions of AI.

TYPES OF AI

Nowadays we can distinguish types of AI based on capabilities and types of AI based on functionalities.

When talking about types of AI based on capabilities, we can define narrow AI, general AI and super AI.

Narrow AI, also called “weak AI”, is designed for specific and narrow tasks. It targets a single subset of cognitive abilities, advances within that spectrum, and cannot perform beyond its limitations. Real examples of weak AI are well known and used in everyday life, such as Siri, Google Translate, face recognition systems, and even ChatGPT.[30]

At the same time, weak AI, despite its name, can surpass humans in intellectual abilities when performing its specific tasks. The wording “weak” is therefore relevant only in comparison with “strong” systems, not as a characterisation of the systems themselves – they are by no means weak. But is this real “intelligence”, given that we cannot apply to it all of the qualities applicable to human “intelligence”?

It is also worth noting that systems and programs created today continue to pass the Turing test, which is used to assess the level of development of artificial intelligence. The test involves a human conversing with both a computer and another human and trying to determine which is which. However, the test has limitations, because a program can successfully imitate human behaviour while lacking true intelligence. This idea is illustrated by John Searle’s Chinese Room experiment, in which a person who does not know Chinese follows formal rules to produce convincing Chinese responses, yet remains a mere performer without understanding the process. Thus, the Turing Test is not conclusive proof that a machine has true intelligence.

In essence, then, “weak AI” is not “intelligence” in the usual sense, but a set of algorithms prescribed in advance to perform simple specific tasks, such as how to get from point A to point B. This does not endow such systems with real intelligence – the ability to solve diverse problems, think, feel and reflect, as real human intelligence can.

Nowadays, only narrow-spectrum AI has been mastered; these are the types of systems we see everywhere and use in our everyday lives.

General AI, also known as strong AI, is a theoretical concept used to describe a specific approach to the development of artificial intelligence.

John Searle, the American philosopher who introduced the definition of “strong artificial intelligence”, defined its goal not merely as the ability to pass the Turing test: such a program would not just be a model of the mind; it would, in the literal sense of the word, itself be a mind, in the same sense in which the human mind is a mind.

Creating strong artificial intelligence means giving a machine intelligence comparable to humans, including self-awareness, the ability to solve problems, learn and plan for the future. This goal implies that an artificial intelligence machine, like a child, will learn skills through input and experience, continually improving its abilities over time.

Thus, strong AI will be able to understand and learn any intellectual task that a human being can. It is adaptable, flexible, and capable of learning from various domains. It is also an AI that will be capable of solving many tasks and problems in a variety of ways without any human intervention and will be completely autonomous.

That would be real “intellect” in the way we define this term. But the main point here is that strong AI is still just a theoretical concept; such AI does not yet exist.

It is believed that strong AI is true AI – a thinking machine and, in fact, the original goal of AI development – while some scientists say that such AI will never be created at all. N. Bostrom, an Oxford philosopher, also identifies a more powerful AI, “superintelligence”, which can surpass any person through the increasing power and extremely rapid self-improvement of the machine.[31] Bostrom defines super AI as any intelligence that significantly exceeds human cognitive abilities in almost all areas. Super AI would surpass humans in every aspect – from creativity to life wisdom and problem solving.

Thus, super AI remains unfeasible to bring from theory to life, since in terms of technological development we are not even approaching real strong AI. Currently, examples of super AI can be found only in science fiction films, and most researchers, based on the current state of affairs, consider such scenarios unrealistic.

The types of AI based on functionality:

– reactive machines that perceive the environment and give a response, but which are not able to accumulate experience, form memories and make decisions based on existing experience;

– systems with limited memory that are able to take into account observations and some accumulated experience and information;

– intelligent systems that can have their own ideas about the world, other agents and entities;

– self-aware systems that can be aware of themselves and form an idea of themselves.[32]

Thus, the field of artificial intelligence includes a diverse range of capabilities and functions. While advances in narrow AI are clear, the realisation of true general AI and superintelligence remains beyond our capabilities.

At the same time, having analysed the types of AI, we can hypothesise that the term “intelligence”, as the human mind understands it, cannot be applied to currently existing technologies, since they mostly do not reflect the real content of the concept “intelligence”. As we have seen, the definition of “intelligence” includes such characteristics as understanding, self-awareness, emotional knowledge, reasoning and the practical application of knowledge. None of these features reflects the real capabilities of existing weak systems. It is thus evident that contemporary technologies characterised as weak lack “intelligence” in the conventional human understanding. In our opinion, the definition “intelligence” could be applicable only to general AI and super AI, which do not yet exist.

But, in order to understand the real nature of the term “artificial intelligence”, it is necessary to analyse the existing definitions of artificial intelligence.

ANALYSIS OF DEFINITIONS OF AI

At the moment, there are many definitions of AI related to the variety of technologies, functions and application areas.

Alan Turing’s definition would have fallen under the category of “systems that act like humans.”[33] As we can see, this definition does not reflect the real state of weak systems, since they do not really act like humans; it could describe strong AI.

John McCarthy, the founder of the term “artificial intelligence”, described it as “the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”[34] This is also only partly applicable to weak systems, since current computers demonstrate the human mind only partially and within very narrow limits; however, McCarthy’s definition is broader and closer to reality in that it can be understood as a science of creating intelligent machines.

Marvin Minsky, meanwhile, defined “artificial intelligence” as “the science of making machines do things that would require intelligence if done by men”.[35] This definition rightly describes AI as a science of making machines do things that require intelligence when done by humans, but, as with the previous definition, we still have not created machines able to do the same intelligent things that humans can. The definition is applicable to the science, but not to narrow systems as a characterisation of their intelligence.

According to N. Nilsson, “AI in a broad sense is the intelligent behavior of artificially created objects, which, in turn, includes perception, reasoning, interaction and action in complex environments”.[36] He also suggested replacing the Turing test with what he called the “employment test”: “To pass the employment test, AI programs must be able to perform the jobs ordinarily performed by humans. Progress toward human-level AI could then be measured by the fraction of these jobs that can be acceptably performed by machines.”[37] This is a comprehensive description of strong AI, but it is likewise not applicable to weak systems.

According to the Council of Europe glossary, AI is “A set of sciences, theories and techniques whose purpose is to reproduce by a machine the cognitive abilities of a human being. Current developments aim to be able to entrust a machine with complex tasks previously delegated to a human.”[38] Here we want to point out that, even if this definition is better suited to current realities, can we really speak of the existence of cognitive functions in machines? For now, systems are able to use only specific functions such as data analysis, speech, or the use of memory, and these are very limited; they cannot exercise cognitive flexibility, imagination, the ability to reason logically, or the perception of information through the senses. Is it therefore reasonable to put into the definition a goal that is most likely unachievable for now? For existing realities it might be better to correct it to: “A set of sciences, theories and techniques whose purpose is to reproduce or copy by a machine certain (limited) cognitive abilities of a human being, such as speech, memory, data analysis, and problem solving.”

The American Heritage Dictionary defines AI as “The ability of a computer or other machine to perform those activities that are normally thought to require intelligence.”[39] As we analysed before, today’s machines do not reflect real intelligence; this definition is therefore, in our opinion, better suited to strong and super AI.

The Cambridge Dictionary in this case offers one of the most suitable of these definitions, defining “artificial intelligence” as “the use or study of computer systems or machines that have some of the qualities that the human brain has, such as the ability to interpret and produce language in a way that seems human, recognize or create images, solve problems, and learn from data supplied to them”.[40]

There are still a huge number of different definitions of AI. And, having analysed some of them, we can once again be convinced that the content of many of them does not correspond to today’s technologies.

At this point in time, machines cannot sense or be aware of themselves. They are not capable of having “intelligence” in the way we humans are defining this term. If memories are loaded into a computer, for artificial intelligence (AI) it will simply be data, based on the analysis of which it will identify patterns to obtain results. Unlike humans, whose memories are associated with sensations, emotions and awareness, AI works with algorithms and data that lack a deep understanding of the essence of the processes.

Current development of AI technology is focused on improving its functions to achieve human goals. Although AI is perceived as a human tool for improving life, it is becoming increasingly advanced, surpassing humans in processing information, speeding up processes, and performing complex tasks – but only as a tool, not as an intelligent machine.

The widespread “hype” that gripped people with the advent of, for example, ChatGPT or other programs generating various kinds of content, and that gave grounds for calling them artificial intelligence, is in our opinion unjustified, since such systems inherently do not have intelligence in the usual sense. By analysing the existing definitions of artificial intelligence and comparing them with the concept of intelligence, we can once again be convinced that technological progress has not yet achieved real artificial intelligence, as the word intelligence is interpreted in its original sense.

Therefore, after reviewing the general definitions of “intelligence” and “artificial intelligence” and the types of AI, we have come to the conclusion that existing definitions either are not well defined or do not cover narrow systems, because in our opinion these do not reflect “intelligence”. Thus, we propose our own definition, which describes more fully and clearly not only the capabilities of strong and super AI but also makes allowance for existing weak systems. The general, wide definition of AI is: a set of sciences, methods of cognition, and systems and machines created on the basis of hardware, software or other forms, partially reproducing or copying, in a limited way, some functions related to human intelligence – processing input data, forming reasoning, interacting with the environment and learning – with the possible future ability to fully perform or surpass the intellectual functions of a human.

REGULATORY CHALLENGES

Although we have proposed a wider and fuller definition of AI, we still face the absence of a generally accepted, universal definition of the concept. In our opinion, the lack of a unified legal approach to the characteristics of AI could create future difficulties in developing effective regulations. Clearly defined key concepts are essential when developing regulation, and the lack of international agreement on the definition of AI makes it difficult to create a global regulatory approach.

A definition that is too broad can blur regulation, while one that is too narrow can exclude some technologies. A correct formulation, including all the necessary elements, makes it possible to clearly define the subject, object, methods, scope and properties of AI technologies, such as reliability, security and transparency.

The adoption of a unified international concept of artificial intelligence is a complex and multifaceted process that requires concerted efforts among various countries, organisations and public groups.

To do this, it is necessary to maintain constant international dialogue, not only at the level of the European Union but also globally. International working groups should be created to deal with aspects of AI such as ethics, safety, responsibility, transparency and standardisation. Additionally, existing experience in this area should be exchanged between the EU, the USA, the UK, China and other countries.

Further, in our opinion, it is important to develop uniform international standards to define the basic terms, principles and technical aspects of AI. This can be done through participation in international organisations such as ISO.

All this will help create international agreements and treaties between countries in order to establish common principles and standards in the field of AI.

Thus, the adoption of a unified international vision for AI requires long-term efforts and cooperation at the global level. It is important to take into account the diversity of cultures, legal systems and ethical standards of different countries in order to develop a concept that can satisfy a wide range of interests and values.

But this is not the only path of development in this direction. Some researchers have already advanced the idea of a functional approach, and, in our opinion, an international regulatory framework based on the functions of AI would allow more flexible accommodation of differences in definitions and regulation between countries, as well as more effective adaptation to the rapidly changing nature of artificial intelligence.

From a functional perspective, we could distinguish categories such as narrow AI (limited to specific tasks), general AI (more adaptable and capable), and super AI (extremely advanced, with human-like capabilities and beyond).

Additionally, it is necessary to establish regulatory criteria based on the functions that AI systems perform, rather than on the definition of AI in general. For example, regulations may address aspects such as data processing, transparent decision-making, adaptability and learning capabilities.

Gradually, we would come to understand the current limitations in creating strong AI and superintelligence, and develop rules that recognise the current state of artificial intelligence technologies while remaining adaptable to future advances.

A very important point here will also be the use of flexible legal language that takes into account technological advances and does not require constant changes. Regulations need to be developed that set broad principles and standards and can be interpreted and adapted as AI technologies evolve.

International cooperation in this area is also urgently needed to create a harmonised international framework. It would be necessary to establish a common set of functional criteria that could be universally accepted, taking into account cultural and regional differences.

CONCLUSION

In recent years, the rapid integration of artificial intelligence (AI) into various facets of society, coupled with its pivotal role in national strategies and international conferences, highlights both its tremendous opportunities and the emergence of new challenges. The historical development of AI, from Alan Turing’s groundbreaking ideas to the current “golden age”, underscores the transformative journey of the field and emphasises the need for a nuanced and comprehensive understanding of its definitions and concepts to guide effective risk management, control and legislative frameworks.

The analysis of definitions of “intelligence”, together with the subsequent examination of the types and definitions of artificial intelligence, reveals a notable gap between the capabilities of existing technologies and the traditional understanding of intelligence. The diverse definitions of “intelligence” emphasise attributes such as learning, understanding, reasoning, self-awareness and the practical application of knowledge. When applied to current AI systems, however, especially weak or narrow AI, these definitions fall short.

The examination of AI types, including narrow AI, general AI and super AI, underscores the prevailing limitations in achieving true artificial intelligence. While narrow AI excels in specific tasks, the theoretical concepts of general AI and superintelligence remain unrealised in the current technological landscape, and narrow systems are merely a limited way of copying certain functions rather than a reflection of real human intelligence.

The lack of a universally accepted definition of AI complicates the development of effective regulations. To address this, it is essential to propose a comprehensive definition that accommodates both strong and weak AI. The suggested definition portrays AI as a set of sciences, methods of cognition, and of creating systems and machines existing on the basis of hardware, software or other forms, partially reproducing or copying, in a limited way, some of the functions that define human intelligence, through processing input data, forming reasoning, interacting with the environment and learning, with the possible future ability to fully perform or surpass the intellectual functions of a human.

Despite the proposed expanded definition of artificial intelligence, the problem of the absence of a generally accepted definition remains open. The lack of a unified legal approach to defining AI may make it difficult to formulate effective regulations: regulation requires clear definitions, and the absence of international agreement hampers the creation of a global approach.

Therefore, we reviewed two possible scenarios that could help resolve this issue: the adoption of a unified concept, and the creation of a regulatory system based on the functions of AI.

The adoption of a unified international vision or concept requires long-term efforts and cooperation at the global level. Regular international dialogue, the creation of working groups and the development of common standards contribute to the creation of international agreements and treaties to establish common principles and standards in the field of AI.

In the context of the functional approach, defining categories of AI based on its functions provides a flexible framework for regulation. This approach requires flexible legal language and emphasises the importance of international cooperation to create a harmonised framework, taking into account cultural and regional specificities.

In essence, formulating regulations based on AI functionalities, considering the current state of technology and anticipating future developments, offers a pragmatic pathway to address the diversity of definitions and create a regulatory framework conducive to responsible AI advancement.

