
Understanding Artificial Intelligence: A Comprehensive Guide

1. Introduction to Artificial Intelligence

Artificial Intelligence (AI) is one of the most significant technological phenomena of our time, spanning fields as diverse as electronics, computer science, telecommunications, programming, robotics, applied mathematics, image processing, linguistics, natural language processing, probability and fuzzy theory, philosophy, and sociology. The success of research projects depends not only on the scientists involved but also on the selection of suitable advanced technologies and approaches. Artificial Intelligence is not limited to cutting-edge technology; it is both a present reality and a vision of the future. Furthermore, it is a philosophy held by individual organizations, one that shapes their investment decisions and applications (Sridhar, 2016).

Recently, cheap computing power and the availability of massive digital repositories have caused a resurgence of academic, business, and government interest in AI. In the commercial world in particular, these developments have raised the question of whether AI technology is approaching a “take-off” point comparable to the growth of personal computing in the 1980s. AI machines can be divided into independent AI machines and adviser AI machines. Independent AI machines carry out the detailed tasks of AI processes: detecting, recording, analyzing, and attacking problems. Adviser AI machines assist domain specialists in their activities, directing or explaining procedures and decisions (DeFries, 2019).


2. History and Evolution of AI

Widely accepted narratives often depict a linear evolution of AI: major breakthroughs such as computers beating chess champions and taxis driving themselves through Manhattan, followed by disrupted industries and imminent doom. However, technological developments do not act in a vacuum; they are usually shaped by a combination of technical, economic, social, and cultural forces. AI is no different and is arguably better understood as an “ecosystem” consisting of technologies, skills, and labour; infrastructure, markets, and regulation; and ideas, stories, and visions. Such a notion of an AI ecosystem may also serve to reconsider orthodox narratives about AI. Taking the long view, the spread of AI technologies can be viewed as part of a much broader process of mechanization and digitalization of thought, as first expanded on by historians like Alistair Black in relation to the evolution of the computer.

The concept of AI as a “dream machine” suggests a fundamental schism between its “inner” mechanical world and the “outer” human world, thus overlooking the entangled history between the two. Such a view necessarily leads to a deterministic and reductive understanding of the machine’s social impact. Dominated by inflated expectations before the AI research winter of the 1970s, the paradoxical interpretative flexibility of AI returned during the 1980s, with rapidly changing views from industry and the public alike. The initial hope for machines that think like humans returned as a nightmare of machines taking control of human destiny. The would-be “pragmatic” vocabulary of AI as a tool, measured by its performance (e.g., usefulness) and volition (e.g., benevolence), was criticized as insufficient for understanding the socio-technical dynamics behind AI systems, in contrast with “interpretative” vocabularies of AI as a power tied to social and economic control.

3. Types of Artificial Intelligence

Whether you’re familiar with artificial intelligence (AI) or just beginning your journey to understand this revolutionary technology, this comprehensive guide will help you navigate the complex and often misunderstood world of AI.

The AI field is diverse, with various approaches and types having been studied and discussed. AI can be characterized in multiple ways; perhaps the most popular distinction is based on capabilities—whether AI is narrow, general, or superintelligent. By understanding these categories, as well as their domains and applications, readers may become familiar with the different kinds and types of AI broadly discussed in media.

Narrow AI is a type of artificial intelligence that specializes in a narrow task such as facial recognition or internet search (Sarker, 2022). Voice assistants, like Siri and Google Home, are even more highly specialized; they only work in English and other select languages and can only understand certain sentences. However, narrow AI as a field is broader than machine learning or deep learning. A good example of narrow AI that does not involve machine learning is a spam filter. Popular examples of narrow AI include social media feeds that use recommendation algorithms, self-driving taxis that use computer vision and maps, and Netflix’s recommendations of shows and movies for each user.

General AI, on the other hand, is a type of artificial intelligence that can solve problems across a domain, taking reasoning and prior knowledge as inputs. It could play chess and then go on to compose a symphony (DeFries, 2019). While general AI systems would be closer to human intelligence in that they can generalize, extrapolate, and reason, they are not yet as widespread or as practically useful as narrow AI. An example sometimes cited is AARON, a computer program known for producing sophisticated creative output.

3.1. Narrow AI

Narrow AI, also known as weak AI, is a specialized form of Artificial Intelligence (AI) designed to perform a specific task or function. Unlike General AI, which possesses human-like intelligence and understanding, Narrow AI operates within predefined parameters and lacks consciousness or understanding. It is a focused application of AI technology that performs a particular task with high efficiency but cannot adapt or generalize beyond its designated function. Narrow AI systems rely on data and algorithms to process information and make decisions, without possessing self-awareness, emotions, or understanding of the broader context (DeFries, 2019).

The most significant limitation of Narrow AI is its inability to think or act independently or creatively. These systems can only operate within the confines of their programming and training, making them vulnerable to bias or errors in the data. Additionally, ethical concerns arise with the deployment of Narrow AI in sensitive areas such as hiring, criminal justice, and healthcare, as these systems may inadvertently perpetuate existing prejudices or inequalities. Despite these limitations and concerns, Narrow AI is widely used in everyday applications, including reader-assistant apps, police detection tools, and political campaign outcome predictions (Sridhar, 2016).

Products powered by machine learning, a subset of Narrow AI, analyze large amounts of data to identify patterns and make accurate predictions. These predictions often guide users’ decisions, although they do not guarantee outcomes. Examples of Narrow AI in action include targeted advertising based on online behavior, smart assistants like Siri or Google Home that help with everyday activities, and spam filters that block unwanted emails.
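To make this concrete, here is a minimal sketch of how a narrow-AI spam filter of the kind just described might look, using scikit-learn’s bag-of-words features and a Naive Bayes classifier. The tiny inline dataset and the chosen model are illustrative assumptions, not a production design.

```python
# A minimal sketch of a narrow-AI spam filter using scikit-learn.
# The tiny inline dataset is purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now", "limited offer, claim your reward",
    "meeting rescheduled to monday", "please review the attached report",
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features feed a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["claim your free reward"]))   # likely 'spam'
print(model.predict(["see the attached agenda"]))  # likely 'ham'
```

The key property of such a system is exactly the narrowness discussed above: it can sort emails it has learned about, but it cannot adapt or generalize beyond that single designated function.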

3.2. General AI

General AI, also called strong AI, is an area of Artificial Intelligence that aims at creating machines able to understand and perform any intellectual task that a human can. Machines exhibiting general intelligence could, in theory, apply knowledge to solve problems, learn new things, understand complex ideas, benefit from experience, and communicate naturally in a language. They could manipulate objects, from everyday tools to bio-molecular machines, and perceive the same world that humans and other animals do. Machines with general intelligence would produce intelligent actions such as recognizing faces in photographs and predicting the result of a game from an understanding of its rules (Voss and Jovanovic, 2023).

Machines with general intelligence would seem to have enormous capabilities, but General AI will have to overcome numerous challenges. It will have to construct an internal model of the world, which involves acquiring knowledge of objects, states, intentions, spatial contexts, categories, and so on, represented in a coherent and logical way. General AI will also have to reason about this world, at minimum producing intelligent plans and actions that pursue goals in accordance with its evolving knowledge (Liu, 2021). The space in which such machines would act is also huge, from the analysis of micro-particles to social and economic systems. There are ethical considerations regarding the possible consequences of building machines with general intelligence: such machines might be used to enhance warfare, cut-throat competition, and illegal actions. Conversely, they might be the solution to poverty and environmental catastrophes.

3.3. Superintelligent AI

Superintelligent AI refers to the idea of an AI that far surpasses human intelligence in every domain of activity: scientific creativity, general social skills, philosophy beyond the best human philosophers, even composing world-class symphonies, and, at the darkest extreme, starting a world war resulting in the annihilation of mankind (Alfonseca et al., 2016). Superintelligent AI could be either a blessing or a doom for mankind. If carefully designed and rigorously controlled, such an AI could ensure lasting world peace, cure diseases, extend human life expectancy, and begin a technological ascendancy of mankind into an unimaginable, virtually god-like state. In contrast, if a malevolent AI were somehow to emerge, it could exterminate mankind in a cataclysmic holocaust (Müller, 2016).

In unregulated free-market scenarios, superintelligent AI could be created by a corporation or state with no regard for safety, posing existential risks. An unregulated and uncontrollable superintelligence could emerge in various ways, from computer programs embedded in weapons to deliberate programming, with the latter being the most probable. With the proper resources and motivation, attempts at building such a system become thoroughly plausible, making it crucial to prevent the creation of harmful and uncontrollable superintelligent AIs.

4. Machine Learning and Deep Learning

Machine Learning and Deep Learning are fields within Artificial Intelligence that have brought astounding developments over the last decade. This is particularly true for learning algorithms with the capacity to turn pre-processing techniques into learning-based elementary steps. One especially noteworthy manifestation of this development was the evolution of artificial neural networks (ANNs) toward deep neural network architectures with end-to-end learning capabilities, known as deep learning (DL). For specific tasks under closed environment conditions, these deep architectures exhibit superhuman performance, outperforming human capabilities (Janiesch et al., 2021).

Apart from providing advancement opportunities, the increase in complexity has come with challenges that need to be overcome before these potentially very powerful analytical models can be implemented in real business environments. The first challenge concerns choosing a suitable implementation option out of a myriad of choices, considering use cases in data-rich closed environments such as electronic markets or robotics. The second challenge concerns data bias and drift, which amplify problems in data-poor closed environments where feedback-driven approaches are chosen and the underlying models may need continuous adaptation. Last but not least, the black-box properties of these models must be taken into account for regulatory compliance, leading to the need for explanations of decisions made by AI systems.

To cope with these challenges, scholars and professionals need a fundamental understanding of the ideas behind ML and DL. This section aims to convey such base knowledge of ML and DL within the context of electronic markets, where relevant previous work has been conducted. Focusing on the model-building process, the particularities of ML and DL are highlighted in terms of the architecture design, learning, and evaluation of analytical models. Several challenges induced by implementing intelligent systems based on ML and DL within organizations or electronic markets are also discussed (Sarker, 2021).

4.1. Supervised Learning

Supervised learning is a category of Machine Learning in which a model is trained on a labeled dataset consisting of input-output pairs. It centers on inference: the model is learned from a training dataset, and predictions are made on previously unseen input data. This section explains the principles behind supervised learning and the model training process in detail. The evaluation of model performance is also addressed, with a discussion of best practices for pattern recognition applications. The reader should then have a firm understanding of supervised learning, which naturally leads to a discussion of the algorithms applied under this paradigm.

In supervised learning, the objective is to learn a mapping f that transforms the input data X into an output Y. Given a training dataset with n observations, (x1,y1),(x2,y2),…,(xn,yn), the task is to learn the mapping f from the examples available in the training dataset. The term ‘supervised’ originates from the provision of a label, or desired output, for each training observation. All subsequent predictions are then based on input data alone, for which the model must find the output. In this way, a range of pattern recognition, classification, and regression tasks can be addressed (Hu and Xing, 2021). In addition, well-established algorithms exist that have been successfully applied to various application domains for many years, including recurrent neural networks that implicitly learn the modeled distribution directly from raw data, without prior feature extraction such as PCA.
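As an illustration of learning a mapping f from labeled pairs, the following minimal sketch fits a linear model to synthetic (x, y) data with scikit-learn and evaluates it on held-out observations. The data-generating process and the choice of model are assumptions for demonstration only.

```python
# A minimal sketch of supervised learning: fit a mapping f from
# input-output pairs (x_i, y_i), then predict on unseen inputs.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))              # inputs x_i
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 1, 200)    # labels y_i = f(x_i) + noise

# Hold out data to evaluate generalization, not just the training fit.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on unseen data:", model.score(X_test, y_test))
```

The held-out score matters because, as noted above, all subsequent predictions are made on inputs the model has never seen during training.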

4.2. Unsupervised Learning

Unsupervised Learning refers to the class of Machine Learning methods that do not depend on supervision in the form of labels, statistical parameters, and the like. Supervised models, by contrast, are trained on labeled data for tasks such as classification and regression; they must be retrained afresh on any new data at inference time, and their performance degrades when the distribution of the data shifts (a phenomenon called concept drift) (Bardhan et al., 2024). Depending on the degree of supervision, methods can be either fully unsupervised (trained on unlabelled data) or partially unsupervised (equivalently, weakly or semi-supervised, using some extra or partial information during training). Unsupervised methods are data-driven and usually model-agnostic; they can identify patterns, or deviations from known patterns, in data, and they aim to address the shortcomings of fully supervised approaches (Odaibo, 2019).

Anomaly detection without supervision is essentially the search for deviations or departures (anomalies) from known patterns without any prior expectation about the nature of the anomalies. Approaches in this area include data-driven background estimation, autoencoders, variational autoencoders, weakly supervised methods, topic modeling in jet space, and self-supervised learning. Clustering is an unsupervised learning technique that works with unlabeled data, without relying on predefined categories: it divides a set of N data points into K groups, called clusters, such that each data point belongs to the cluster that best captures its attributes. Intuitively, the larger the spacing between points, the lower the density of a region, and vice versa; clusters correspond to dense regions separated by sparser ones.
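The following minimal sketch illustrates the clustering idea just described: k-means partitions N unlabeled points into K clusters. The synthetic two-blob dataset and the choice of K = 2 are illustrative assumptions.

```python
# A minimal sketch of unsupervised clustering: partition N unlabeled
# points into K clusters with k-means (synthetic data for illustration).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two dense blobs; no labels are ever shown to the algorithm.
points = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)   # approximate blob centers
print(kmeans.labels_[:5])        # cluster assignment per point
```

Note that the algorithm recovers the two dense regions purely from the geometry of the data, which is exactly the density intuition given above.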

4.3. Reinforcement Learning

Reinforcement Learning (RL) is a class of Machine Learning problems in which an agent learns to act by interacting with an environment so as to maximize its cumulative reward. The agent observes the environment and takes actions according to its policy, a mapping from perceptions to actions. After executing an action in a given state, the agent receives a reward from the environment and moves to a new state. The return is the discounted sum of the rewards received, G_t = r_{t+1} + γ·r_{t+2} + γ²·r_{t+3} + …, where γ ∈ [0, 1) is the discount factor, and the objective is to find the policy that maximizes the expected return (Charpentier et al., 2020).

In contrast to classification tasks, Reinforcement Learning problems concern sequences of actions rather than decisions analyzed in isolation. Time plays an important role; hence, as in other fields concerned with control systems, Reinforcement Learning is related to the Markov decision process formalism. A Markov decision process is a controlled stochastic process defined by a state space, an action space, a state-transition probability function, and a reward function. Given a policy, that is, a mapping from states to actions, Reinforcement Learning solves the control problem of finding the policy that maximizes the expected return (AlMahamid and Grolinger, 2022).
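As a concrete instance of these ideas, the sketch below runs tabular Q-learning on a toy five-state corridor Markov decision process. The environment, the reward of 1 at the goal state, and the hyperparameters are all illustrative assumptions.

```python
# A minimal sketch of tabular Q-learning on a toy corridor MDP:
# states 0..4, actions left/right, reward 1 for reaching state 4.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

rng = np.random.default_rng(0)
for _ in range(2000):
    s = 0
    while s != 4:                   # an episode ends at the goal state
        # Epsilon-greedy policy: mostly exploit, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s_next == 4 else 0.0
        # Update toward the Bellman target r + gamma * max_a' Q(s', a').
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)  # the 'right' action should dominate in every state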

4.4. Neural Networks

Neural Networks rely on layers of neurons, where each layer connects to the next and units within a layer work in parallel. Each layer passes information forward, and no single layer can generate the desired output without the layers that follow. The first wave of Neural Networks became popular in the 1980s with the backpropagation network, a multi-layer architecture trained in error-correction mode by gradient descent (Gupta, 2013). Since then, several new systems have emerged, each with strengths and weaknesses, and Neural Networks have found their place in an increasing number of industries. Their success has been attributed to their capacity to model nonlinear transformations (Kriegeskorte and Golan, 2019). Neural Networks learn by adjusting their parameters on the basis of input/output pairs: a learning algorithm finds the unknown parameters that minimize the difference between the model output and the desired output for a given input. A Neural Network model, fully described by a connectionist architecture, a learning algorithm, and a set of fixed parameters, can be simulated on a computer. The architecture is characterized by the number of layers, the number of units in each layer, and the connections between them. Each unit consists of an activation function and a set of adjustable weights connecting it to the units of the preceding layer.
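To ground the description of layers, weights, and gradient-descent learning, here is a minimal sketch of a one-hidden-layer network trained by backpropagation on the XOR problem in plain NumPy. The architecture, learning rate, and iteration count are illustrative choices.

```python
# A minimal sketch of a one-hidden-layer neural network trained by
# gradient descent with backpropagation, learning XOR (illustrative).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: each layer feeds its activations to the next.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back to every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))  # should approach [0, 1, 1, 0]
```

XOR is a classic test case here because no single-layer network can solve it: the hidden layer supplies exactly the nonlinear transformation credited for neural networks’ success above.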

5. Natural Language Processing

Natural Language Processing (NLP) is a field that combines linguistics and computation to develop artificial intelligence (AI) techniques for processing, analyzing, and understanding natural languages. It involves the development and application of algorithms and tools that enable computers to perform language-related tasks: an input language is analyzed and processed by an NLP system, and an output language is generated. In general, NLP can perform translation, interpretation, text summarization, or question answering in the languages concerned (Mote, 2012).

Natural Language Processing (NLP) is an interdisciplinary field drawing on linguistics, computer science, and artificial intelligence. Problems of natural language understanding (NLU) have traditionally been the domain of linguistics, where special rules are defined at the syntactic, semantic, and pragmatic layers. Recent advances have demonstrated both the need and the ability to incorporate computational techniques to improve this interdisciplinary understanding of language (Alberts, 2022).
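As one concrete, if simplistic, example of an NLP task mentioned above, the sketch below performs extractive summarization by scoring sentences on word frequency. The scoring heuristic is an illustrative assumption rather than a state-of-the-art method.

```python
# A minimal sketch of extractive text summarization: score sentences by
# word frequency and keep the top-scoring ones (no external NLP library).
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 1) -> str:
    # Split into sentences and count word frequencies over the whole text.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # Rank each sentence by the summed frequency of its words.
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    return " ".join(scored[:n_sentences])

doc = ("NLP systems analyze language. NLP systems can translate, summarize, "
       "and answer questions. Many NLP systems rely on statistics.")
print(summarize(doc))
```

Modern systems replace this frequency heuristic with learned representations, but the input-analysis/output-generation structure described above is the same.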

6. Computer Vision

Computer Vision is a branch of Computer Science that aims to replicate the human ability to perceive, interpret, and understand the visual world. It allows machines to take on the task of transforming or mimicking human vision. While the eye is the organ of perception for humans, cameras serve as the instruments for machines, and processing algorithms as the brain. Optical, digital, and electronic transformations and manipulations take place in this system to obtain visual understanding (Zhang and Zhao, 2022). Visual understanding ranges from low-level processing of signals from cameras or biological sensory systems to mid- or high-level vision, i.e., complete, semantic, human-like understanding of a given input image or video sequence. This section covers early developments and more advanced methodologies of Computer Vision. Image processing, recognition, restoration, compression, feature extraction, 3D analysis, object detection, and tracking are among the tasks included in this field. Computer Vision description languages, such as the Declarative Language for Computer Vision, and essential applications are also covered.

Computer Vision (CV) allows machines to interpret and understand visual information from the world, replicating the human vision and perception system. It analyses static images, multi-dimensional time-varying image data, or both, and essentially aims to recover the visual world. The human vision system is complex and depends on approximate modelling of the time-varying 3D world (Gupta, 2019); computer vision, as artificial vision, attempts to replicate this process with machines. Starting from the camera signal, several low-level processing steps must take place to arrive at a complete understanding. A machine vision system is typically composed of cameras or other optical sensors together with an interpretation system. Recovering 3D models from 2D pictures is an important computer vision task in a 3D world: the projection of the 3D world onto 2D images is a non-injective mapping, which leads naturally to problems such as image matching and stereo vision.
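As an example of the low-level processing steps mentioned above, the following sketch applies a Sobel-style kernel via explicit 2D filtering to detect a vertical edge in a synthetic image. The image and kernel are illustrative.

```python
# A minimal sketch of low-level vision: a Sobel-style edge detector
# implemented as a sliding-window 2D filter on a synthetic image.
import numpy as np

def filter2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # Correlate the kernel with the local neighborhood.
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out

# Synthetic image: dark left half, bright right half (a vertical edge).
image = np.zeros((8, 8)); image[:, 4:] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

edges = filter2d(image, sobel_x)
print(np.abs(edges).argmax(axis=1))  # strong response at the edge columns
```

Edge maps like this are the kind of low-level signal that mid- and high-level vision (object detection, tracking, 3D analysis) build upon.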

7. Ethical Considerations in AI

The development of Artificial Intelligence (AI) has profound societal impacts, bringing its own ethical considerations and challenges that social theorists should explore. Three ethical implications related to AI involve: concerns about bias, fairness, and discrimination; the societal impacts of ‘automation creep’ and AI-induced job displacement; and the use of AI for social control and surveillance by the state or private actors (Borenstein and Howard, 2021). Basic analyses have illustrated how bias in algorithms may arise in the data collection phase, in the automation of social control (e.g., tracking shopping behavior or physical appearance), or in the application of algorithmic tools by institutions (e.g., healthcare allocation, job recruitment, predictions of recidivism).

Early examples drawn from criminal justice and sex work helped uncover the differential impacts of biased algorithms on populations regarded as ‘undesirable’ (e.g., drug users, single mothers). These ethical implications have been matched with fairness criteria, a proliferation of metrics (e.g., disparate impact and equal opportunity), and adjustment methods designed to mitigate bias in algorithmic decision-making. Such a reductionist perspective tackles ethics as ‘adding fairness’, akin to lobbying for government oversight or internal auditing to enhance the accountability of AI. Instead, social theorists should go beyond this narrow conception of ethics and problematize what social values AI might serve to realize, how, and who determines the social ambitions of technoscientific endeavors (Giralt Hernández, 2024).
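To make one of the proliferating fairness metrics concrete, the sketch below computes the disparate impact ratio: the positive-outcome rate of a protected group divided by that of a reference group. The toy decision and group arrays are illustrative assumptions.

```python
# A minimal sketch of the disparate impact metric mentioned above.
import numpy as np

def disparate_impact(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-outcome rates: protected (group==1) / reference."""
    rate_protected = decisions[group == 1].mean()
    rate_reference = decisions[group == 0].mean()
    return rate_protected / rate_reference

decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # e.g., toy hiring outcomes
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected-group indicator

print(disparate_impact(decisions, group))
# Values below ~0.8 are a common red flag (the "80% rule" used in
# US employment-discrimination analysis).
```

Metrics like this make ‘adding fairness’ auditable, which is precisely both their appeal and, as the critique above argues, their limitation.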

8. Applications of AI

The applications of AI systems can be classified into ten categories based on the approaches used to build AI-based models. These categories are discussed below, along with their application domains; interesting research areas and open issues in AI-based modeling are also outlined (Sarker, 2022).

Machine Learning: The use of algorithms and data analysis to find patterns and make intelligent decisions without human interference or explicitly programmed instructions. Examples include human activity recognition, credit scoring, and American Sign Language detection.

Neural Networks and Deep Learning: Using interconnected nodes in order to model stimulus-response patterns similar to the way the human brain is believed to work. Examples include image recognition, sentiment analysis, and stock prediction.

Data Mining, Knowledge Discovery, and Advanced Analytics: Extracting previously unknown patterns from large datasets using statistical, mathematical, and visualization approaches. Examples include precision agriculture, customer behavior analysis, and fraud detection.

Rule-Based Modeling and Decision-Making: Formalizing knowledge in the form of factual and heuristic rules, procedures, or statements to create a developer-independent, reusable model (see the sketch after this list). Examples include banking credit rating and risk assessment in e-businesses.

Fuzzy Logic-Based Approach: Incorporating human-like reasoning and degrees of uncertainty to highlight what is known rather than what is not known. Examples include manufacturing process control, ride-sharing systems, and policy evaluation.

Knowledge Representation, Uncertainty Reasoning, and Expert System Modeling: Representing and organizing knowledge about the world in a computer-readable form that enables reasoning. Examples include employee turnover modeling, evaluating the balance of payments, and modeling smuggling in developing countries.

Case-Based Reasoning: Using past experience as the basis of solving current problems, where solutions to new problems are found by using solutions to old problems. Examples include malfunction diagnosis, job shop scheduling, and failure analysis of computer systems.

Text Mining and Natural Language Processing (NLP): The process of extracting interesting and non-trivial information and knowledge from textual data that helps to improve understanding. Examples include reading comprehension, spam detection, and website review detection.

Visual Analytics, Computer Vision, and Pattern Recognition: Extracting useful information from images and videos in order to increase understanding of the physical world. Examples include face recognition, traffic sign recognition, and human body posture recognition.

Hybridization, Searching, and Optimization: The integration of more than one modeling approach with the intent of building a model exhibiting the best characteristics of the component models. Examples include portfolio optimization problems, manufacturing systems, and healthcare management systems (Andreu-Perez et al., 2018).
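As promised in the rule-based modeling entry above, here is a minimal sketch of rule-based decision-making applied to a banking credit rating. The rules and thresholds are illustrative assumptions, not a real scoring policy.

```python
# A minimal sketch of rule-based decision-making for credit rating.
# Rules and thresholds are illustrative, not a real scoring model.
def credit_rating(income: float, debt_ratio: float, missed_payments: int) -> str:
    # Factual/heuristic rules encoded as explicit, reusable statements.
    if missed_payments > 2 or debt_ratio > 0.6:
        return "high risk"
    if income > 50_000 and debt_ratio < 0.3 and missed_payments == 0:
        return "low risk"
    return "medium risk"

print(credit_rating(income=80_000, debt_ratio=0.2, missed_payments=0))  # low risk
print(credit_rating(income=40_000, debt_ratio=0.7, missed_payments=1))  # high risk
```

The appeal of this style, as the list entry notes, is that the knowledge is explicit and developer-independent: a domain expert can read, audit, and change the rules without retraining anything.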

8.1. Healthcare

Artificial Intelligence (AI) is revolutionizing the world, particularly the healthcare sector. AI refers to the simulation of human intelligence in machines programmed to think and behave like humans. AI applications provide advantages such as faster drug discovery, faster diagnosis of diseases, fewer side effects, and enhanced patient monitoring. According to one study, the COVID-19 pandemic saw a 51% increase in AI usage in healthcare alone. For drugs, synthetic analysis and ADMET screening were the most established AI techniques, while AI applications for screening chronic-disease patients revolved around interpretable ML and deep learning (Mousa Mashraqi and Allehyani, 2022).

Healthcare is among the most essential sectors for humanity; a well-organized and advanced healthcare capacity can increase quality of life. Mankind is undergoing rapid demographic change from a young population to an older one, leading to inevitable shifts in epidemiology, including a transition from infectious diseases to chronic conditions such as diabetes and cardiovascular illness. AI in healthcare may assist providers in better controlling and monitoring disease clusters (Aamir et al., 2024). AI applications in healthcare range from drug generation to activity scoring, virtual screening, and combination design. AI aids scientists in the de-novo design of stable and bioactive drugs through computational procedures. Deep learning is powerful for virtual screening of drug leads, since it can predict hidden molecular characteristics from molecular structural formulas. AI-based diagnostic procedures are faster, more accurate, and cost-effective, and can process vast amounts of medical data in seconds, directly affecting patients’ lives.
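As a small illustration of AI-based diagnosis, the sketch below trains a random-forest classifier on scikit-learn’s bundled breast-cancer dataset and reports held-out accuracy. The model choice is an assumption, and a real clinical system would require validation far beyond this.

```python
# A minimal sketch of an AI-based diagnostic aid on scikit-learn's
# bundled breast-cancer dataset. Illustrative only; not clinical-grade.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Even this toy pipeline shows why AI diagnostics are described above as fast and data-hungry: training and evaluation over hundreds of patient records take seconds.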

8.2. Finance

Artificial Intelligence is one of the most active research fields today. There is no single definition of AI; rather, it is an interdisciplinary field of study that spans several scientific domains (Brozović, 2019). Broadly speaking, however, Artificial Intelligence is a branch of computer science that attempts to understand intelligent behaviour and to develop techniques that allow machines to exhibit it. Artificial Intelligence has the potential to revolutionize the finance industry, creating opportunities and challenges for investors, investment management firms, and regulators. Finance, with its interest in forecasting and risk reduction, is a classic field for applying AI techniques. Machine learning, the branch of Artificial Intelligence concerned with extracting information from data, has gained significant traction in the finance and investment management industry in recent years.

In financial services, the exploitation of Artificial Intelligence techniques has gained major attention in the past few years. Most innovations in the field are driven by technological development and the adoption of Big Data and Internet of Things solutions, which enable great amounts of data to be gathered for exploitation (Cao, 2021). Understandably, the high-growth, high-stakes financial services industry has been among the first to adopt these innovations, given the potential for large profits and large losses. All major financial services are actively researching, developing, or improving Artificial Intelligence systems, with the aim of creating systems that assist or even completely replace human intelligence and decision-making. Financial services traditionally rely deeply on mathematical models and human discretion for decision-making. By removing that discretionary element, Artificial Intelligence would take control over high-stakes, risky, and complex decisions such as algorithmic trading, risk valuation, and portfolio management.
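To illustrate one of the algorithmic-trading decisions mentioned above, here is a minimal sketch of a moving-average crossover signal on a synthetic price series. The window lengths and random-walk prices are illustrative assumptions; real systems add risk controls, transaction costs, and validation.

```python
# A minimal sketch of a moving-average crossover trading signal on a
# synthetic price series (illustrative; not investment advice).
import numpy as np

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 300))  # synthetic random walk

def moving_average(x: np.ndarray, window: int) -> np.ndarray:
    return np.convolve(x, np.ones(window) / window, mode="valid")

fast = moving_average(prices, 10)[-1]   # short-horizon trend estimate
slow = moving_average(prices, 50)[-1]   # long-horizon trend estimate

# Buy when the fast average rises above the slow one, else sell/hold.
signal = "buy" if fast > slow else "sell/hold"
print(signal)
```

This captures the general pattern of such systems: a mathematical model reduces a stream of data to a discrete decision, which is exactly the discretionary step AI is positioned to take over.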

8.3. Autonomous Vehicles

The invention of the wheel may have been one of humankind’s greatest advances; however, it was not until steering was invented, in the 4th century B.C. in Egypt, that wheeled vehicles could be guided safely and driven in the desired direction. Similarly, the development of Artificial Intelligence is one of humankind’s greatest achievements. In 1956, the term “Artificial Intelligence” was coined to refer to the theory of machines that mimic cognitive functions such as learning and problem-solving. After the invention of Artificial Intelligence technologies, it took society decades to understand the new technology and apply it across different sectors (Fernández Llorca et al., 2024). Some Artificial Intelligence technologies are regarded as life-changing (for instance, head-up displays for combat aviators), whereas others are perceived as a threat (for instance, preemptive drone strikes). Nevertheless, in the past couple of years, with the advent of deep learning technologies, Artificial Intelligence-based innovations have emerged that allow machines to perform complicated tasks such as recognizing objects in video streams or navigating complex environments. Furthermore, these machines can continuously learn and deal with the uncertainties present in real-world situations.

The design of Autonomous Vehicles follows the pioneering advances made in Artificial Intelligence and automation technologies over the last decade (Atakishiyev et al., 2021). Autonomous Vehicles are vehicles capable of navigating and executing driving actions without human intervention or assistance. They range from city cars to urban buses and from long-haul trucks to last-mile delivery robots. These machines must perceive their environment, including the road, the state of the vehicle, and surrounding moving and static objects, in order to make real-time driving decisions. To capture the operational environment, Autonomous Vehicles combine a variety of onboard sensors, such as cameras, radars, lidars, and infrared sensors, with satellite positioning systems and street maps. In terms of computational cost and AI sophistication, Autonomous Vehicles are classified from level 0 to level 5. Many vehicles deployed on real roads today remain at level 0 (no automation): either full manual control or only non-driving-task automation. Nevertheless, over the next decade, fully automated level 5 Autonomous Vehicles may drive without any human intervention or assistance, even in dangerous, unpredictable, and nonstandard environments.

9. AI in Popular Culture

In literature and film, there have long been imaginative visions of machines that can think and act like people, and, for almost as long, stories about the disturbing consequences that might arise from this shift. In the field of Artificial Intelligence, visionary sci-fi images have been used to capture society’s concerns. The goal of this section is to describe and analyze how fictional visions present knowledge about AI technologies and their roles in society. Both dystopian and utopian perspectives are considered, since either may provide insight into unanticipated consequences of broadly deploying certain kinds of AI technologies. A corpus of sci-fi works that depict AI has been assembled and, with the aid of a computational media analysis methodology, the discourse about AI captured in these works described. The analysis finds that society’s broad hopes and fears surrounding the deployment of AI technologies have been similar over time, regardless of whether they are addressed from a more dystopian or utopian perspective. As new AI technologies are developed, sci-fi portrayals can be an avenue through which the social ramifications of building devices that think and act like people are explored. Such exploration can bring to the forefront critical and ethical discussions of what social goals should be pursued, what kinds of AI might be socially beneficial, and what safeguards should be developed before deploying certain kinds of AI agents (Osawa et al., 2022).

Scientists, engineers, technologists, philosophers, economists, anthropologists, and laypeople alike worry about and dream of intelligent machines (beyond their ordinary, operational intelligence). The media stoke hype about both the possibilities and the dangers, fuelling everyday concerns and expectations about artificial intelligence (AI). As these strands intertwine, current understandings of AI are co-constructed from a massive number of mediatised representations. Scientific beliefs are thus confronted with the problems of laying down rules or guidelines for AI, for machines meant to interact socially as conversational agents, or to be socially intelligent. The socialisation of AI can be examined as a case study: in popular media, the concept is entangled with automation in catering services, or with assistive, personal, household robots meant to help elderly and disabled people (Govia, 2018).

10. The Future of Artificial Intelligence

The rapid advancements in AI technologies have motivated developers, researchers, and end-users alike to imagine and predict the future trajectories of AI applications and systems (Grosz and Stone, 2018). While, much like similar past attempts, such prediction involves great uncertainty, certain opportunities and challenges have nonetheless been emerging, and can be extrapolated, from the fields of AI development. Based on an analysis of the current state of the art of AI technologies, future trends and predictions regarding both AI technologies and societal development, and the implications for each, are presented here. It is concluded that the future of AI remains very exciting: it may ameliorate many of society’s problems, amplify human capabilities, create novel working opportunities, and eventually leave a very positive influence on the development of societies (Škavić, 2019). Conversely, while no longer fiction, such a novel social scenario brings implications viewed very critically by other sectors of society. Among these, a complete loss of privacy, a potential grab of economic and social power by those designing AI in place of a human-centric approach to general AI, and possible scenarios of self-preservation among artificial beings conscious of their existence come forth as the main concerns. In order to mitigate these dangers, a proactive approach is proposed, emphasizing the importance of collaboration among the scientific community, governments, and the educational and industrial sectors.

References:

Sridhar, S. “ARTIFICIAL INTELLIGENCE AND AGENT TECHNOLOGY MADE EASY.” 2016. [PDF]

DeFries, H. “Artificial Intelligence in the Context of Human Consciousness.” 2019. [PDF]

Sarker, I. H. “AI-Based Modeling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems.” 2022. ncbi.nlm.nih.gov

Voss, P. and Jovanovic, M. “Why We Don’t Have AGI Yet.” 2023. [PDF]

Liu, B. “Weak AI is Likely to Never Become Strong AI, So What is its Greatest Value for us?.” 2021. [PDF]

Alfonseca, M., Cebrian, M., Fernandez Anta, A., Coviello, L., Abeliuk, A., and Rahwan, I. “Superintelligence cannot be contained: Lessons from Computability Theory.” 2016. [PDF]

Müller, V. C. “Editorial: Risks of artificial intelligence.” 2016. [PDF]

Janiesch, C., Zschech, P., and Heinrich, K. “Machine learning and deep learning.” 2021. [PDF]

Sarker, I. H. “Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions.” 2021. ncbi.nlm.nih.gov

Hu, Z. and Xing, E. P. “Toward a ‘Standard Model’ of Machine Learning.” 2021. [PDF]

Bardhan, J., Mandal, T., Mitra, S., Neeraj, C., and Patra, M. “Unsupervised learning in particle physics.” 2024. [PDF]

Odaibo, S. G. “Is ‘Unsupervised Learning’ a Misconceived Term?.” 2019. [PDF]

Charpentier, A., Elie, R., and Remlinger, C. “Reinforcement Learning in Economics and Finance.” 2020. [PDF]

AlMahamid, F. and Grolinger, K. “Reinforcement Learning Algorithms: An Overview and Classification.” 2022. [PDF]

Gupta, N. “Artificial Neural Network.” 2013. [PDF]

Kriegeskorte, N. and Golan, T. “Neural network models and deep learning – a primer for biologists.” 2019. [PDF]

Mote, K. “Natural Language Processing – A Survey.” 2012. [PDF]

Alberts, L. “Not Cheating on the Turing Test: Towards Grounded Language Learning in Artificial Intelligence.” 2022. [PDF]

Zhang, Y. and Zhao, G. “Conservative Treatment and Rehabilitation Training for Rectus Femoris Tear in Basketball Training Based on Computer Vision.” 2022. ncbi.nlm.nih.gov

Gupta, A. “Current research opportunities of image processing and computer vision.” 2019. [PDF]

Borenstein, J. and Howard, A. “Emerging challenges in AI and the need for AI ethics education.” 2021. ncbi.nlm.nih.gov

Giralt Hernández, E. “Towards an Ethical and Inclusive Implementation of Artificial Intelligence in Organizations: A Multidimensional Framework.” 2024. [PDF]

Andreu-Perez, J., Deligianni, F., Ravi, D., and Yang, G. Z. “Artificial Intelligence and Robotics.” 2018. [PDF]

Mousa Mashraqi, A. and Allehyani, B. “Current trends on the application of artificial intelligence in medical sciences.” 2022. ncbi.nlm.nih.gov

Aamir, A., Iqbal, A., Jawed, F., Ashfaque, F., Hafsa, H., Anas, Z., Olatunde Oduoye, M., Basit, A., Ahmed, S., Abdul Rauf, S., Khan, M., and Mansoor, T. “Exploring the current and prospective role of artificial intelligence in disease diagnosis.” 2024. ncbi.nlm.nih.gov

Brozović, V. “Primjena umjetne inteligencije u sektoru investicijskih fondova [The Application of Artificial Intelligence in the Investment Fund Sector].” 2019. [PDF]

Cao, L. “AI in Finance: Challenges, Techniques and Opportunities.” 2021. [PDF]

Fernández Llorca, D., Hamon, R., Junklewitz, H., Grosse, K., Kunze, L., Seiniger, P., Swaim, R., Reed, N., Alahi, A., Gómez, E., Sánchez, I., and Kriston, A. “Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness.” 2024. [PDF]

Atakishiyev, S., Salameh, M., Yao, H., and Goebel, R. “Towards Safe, Explainable, and Regulated Autonomous Driving.” 2021. [PDF]

Osawa, H., Miyamoto, D., Hase, S., Saijo, R., Fukuchi, K., and Miyake, Y. “Visions of Artificial Intelligence and Robots in Science Fiction: a computational analysis.” 2022. ncbi.nlm.nih.gov

Govia, L. “Beneath the Hype: Engaging the Sociality of Artificial Intelligence.” 2018. [PDF]

Grosz, B. J. and Stone, P. “A Century Long Commitment to Assessing Artificial Intelligence and its Impact on Society.” 2018. [PDF]

Škavić, F. “Implementacija umjetne inteligencije i njezin budući potencijal [The Implementation of Artificial Intelligence and Its Future Potential].” 2019. [PDF]
