
The AI Revolution: What Awaits Us?

1. Introduction

Every so often, there comes a technological revolution that shakes humanity to its core. It almost always marks the beginning of a new era, and rapid and unapologetic change invariably follows in its path. We saw it in the early 19th century with the advent of the Industrial Revolution, the effects of which are still shaping our world today. Fast forward two centuries and we are on the cusp of another such era-defining event with the rise of artificial intelligence (AI). However, the rapidity and transformative nature of the new revolution mean proliferating disruptions at a scale humanity has never seen. There is no doubt that great things await us in the impending AI era. But are we ready for what’s coming?

This article is divided into two parts. The first begins with a brief overview of AI: what it is, what it is not, and why it matters. This is followed by a more detailed discussion of the current state of AI and its main research areas: we pinpoint its progress, the factors contributing to it, and its open challenges. The first part concludes with a brief exposition of the three biggest recent developments in the field. The second part opens with the bright and dark sides of AI, i.e., its capabilities and the many issues it raises regarding safety, privacy, labor markets, and its broader impact on society. It concludes with the future of AI across a number of application domains. While we pay particular attention to the advances to come in robotics and alternative energy, we also reflect on trends and transformations in medicine, agriculture, law and security, and on AI’s role as a tool, before closing with an outlook for the field.


2. Historical Context of AI

We have been hearing plenty of noise about artificial intelligence (AI) since the middle of the 20th century. A number of prominent figures in science and other fields have been intrigued by the possibility of creating an artificial intellect, in no small part because of the breadth of difficulties the problem poses. Research in this field has also inspired many important advances in computing. Attitudes towards AI have shifted as the years have gone by. When digital computers became commercially available, researchers were quick to believe they could recreate intellect in machines. The characteristic software approach, in other words, is to employ algorithms that resolve problems the way humans do, through instinct, trial and error, and judgment.

To anticipate what awaits us in the near future, it helps to trace AI’s progress through history. Philosophers’ curiosity about intellect gave rise to psychology, which treats intelligence as a relationship between stimuli and behavior. In the late 1960s, the experimental robots built by Gordon Frazer both fascinated and frustrated AI researchers, who measured the field’s leaps and bounds against stages of human development. Research at the Miyazaki robotics lab included, among other things, evolutionary logic programming, which mimics natural evolution; Japanese researchers reportedly succeeded in implementing an evolutionary system on a computer, allowing it to work out a course of action for directing a robot. Alcove, finally, is an artificial intelligence platform that permits broad use of temporal, business, and database queries; it is also the name of the software firm that provides the platform.

3. Types of Artificial Intelligence

AI stands for artificial intelligence: a computer or machine made to learn, solve problems, and adapt the way a human does, modeled on human intelligence. There are three different types of AI. These are:

Narrow AI (ANI – Artificial Narrow Intelligence): This is sometimes referred to as weak AI. Here, an AI is programmed to do a single task, and it is very good at doing that task. A few examples of narrow AI in everyday life: when a GPS tells you which routes and turns to take for the shortest travel time, when Netflix suggests a show you might like, or when Facebook recognizes your face.

General AI (AGI – Artificial General Intelligence): This is often referred to as strong AI, full AI, or broad AI. This is an AI that matches or exceeds human performance across a wide range of different tasks. It is still theoretical.

Superintelligent AI (ASI – Artificial Superintelligence): This is an AI that becomes more intelligent than all humans put together. It is still theoretical, but a popular topic in fiction, films, and TV.

Even though ANI exists, it covers only a small slice of what AI could be, and we are still learning how to perfect it.

Narrow AI was the first type of AI invented, and it is the only type of artificial intelligence we see around us today. Many researchers think we will see AGI in the future. Such an AI would be very difficult to program, as human intelligence is very complex: as a “general” intelligence, it would have to be taught, or learn to do something, from only one or a few examples. To create one, we would have to design a versatile learning algorithm that can learn about many situations and concepts. Researchers all over the planet are studying how to build AGI, but they are also debating its social and economic benefits and disadvantages. For starters, workers might be replaced if consumers buy cheaper goods produced by cheaper smart machines.

3.1. Narrow AI

As the most commonly encountered form of AI, narrow AI is essentially custom-built to perform a single task, or several tasks within a specific domain. For example, a weather app on a phone can combine various AI algorithms to provide an hour-by-hour update of the expected local weather, using metrics obtained from localized sensor readings. Narrow AI systems can perform tasks like identifying patterns in data, generating recommendations in response to particular types of input or queries, recognizing speech patterns, and understanding spoken words well enough to transcribe them in real time or translate them into a different language.

In short, narrow AI is pragmatic and suited to many kinds of narrow tasks. It is constrained by the quantity and quality of the data available to it, directly or indirectly, as well as by the actions it can take. Understood this way, narrow AI encompasses not just the complete realm of modern AI analytics systems (given the right data inputs, most of these systems can provide effective predictions), but also most of the tools developed under the umbrella of the IoT, which rely on sensors to estimate conditions specific to the environment they are placed in.
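The “Netflix suggests a show” example from the list above can be made concrete with a tiny nearest-neighbour recommender. This is a minimal sketch with made-up ratings, not any real product’s algorithm: it finds the most similar user by cosine similarity and suggests that user’s favourite among the shows the target user has not yet rated.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(target, others, titles):
    """Suggest the title that the most similar user rated highest
    among items the target user has not rated (rating 0)."""
    best_user = max(others, key=lambda v: cosine(target, v))
    unseen = [i for i, r in enumerate(target) if r == 0]
    pick = max(unseen, key=lambda i: best_user[i])
    return titles[pick]

titles = ["Show A", "Show B", "Show C", "Show D"]
alice = [5, 0, 4, 0]                      # has not seen B or D
others = [[5, 3, 4, 1], [1, 5, 2, 5]]     # two other users' ratings
print(recommend(alice, others, titles))   # -> "Show B"
```

Production recommenders learn from millions of users and many more signals, but the core idea of scoring by similarity is the same.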

3.2. General AI

Having discussed narrow or weak AI in the previous subsection, it is time for us to discuss artificial general intelligence. What is this? It refers to building a computer that can manage any intellectual undertaking a human can, which is how the term was widely understood when the field was getting started. It is the type of AI the field dreams of and works towards, even though today’s systems remain confined to the narrow tasks they were built for, IBM’s flagship AI systems included. We still need to work towards general AI, because it pulls the subfields of AI research together around broader, harder questions.

A general AI system is supposed to manage all the cognitive, physical, and other human-like tasks we know of and, as we keep learning, that is a ridiculously complicated job. Nonetheless, there is an alternative research route: if we can figure out how the human brain learns and works, we may be able to recreate a machine that does almost anything the human brain has learned to do.

Anyone who sets out to create general AI will notice some problems. To begin with, as Searle discusses, constructing general AI is tremendously difficult. Second, there are general AI’s potential consequences. As noted earlier, when advanced technologies outstrip the capacities of individual people, they often become associated with the “bad.” This does not mean that every development with wider risk implications is dangerous, but such developments are disruptive.

3.3. Superintelligent AI

Superintelligence implies AI systems that surpass humans in almost every domain – not just the expertise related to their function, but creative and perceptive abilities as well. In particular, the term “superintelligence” designates intellectual capacities several orders of magnitude higher than those of humans. As of yet, we cannot define with any precision what such an AI would be like; to say that full-fledged superintelligent AI might allow anything from solving social and climate problems to extreme life extension or even simulated world design would still be conservative in outlining the prospects of such a future. This very breadth and vagueness of possible advancements lies behind one of the key concerns about superintelligent AI: the worry about an AGI “take-off” – a rapid transformation into a superintelligent machine, which would happen if AGI designers achieved “recursive self-improvement” leading to an intelligence explosion.

A significant part of the argument about superintelligent AI is speculative, interesting more for asking tough and novel questions than for giving firm or substantial answers. More practically minded analysts have also proposed medium- and long-term options in the face of superintelligent AI. They range from building value-aligned AGI systems, to limiting or preventing their development through regulation or careful R&D management, to containing their negative effects through other technology or governance measures. Some of the most deeply engaged philosophical positions include transhumanist and posthumanist views, which expect radical upgrades to post-biological humans, AI, and their world.

4. Applications of AI in Various Industries

Having tackled the issue of what AI is, let me get to the issue of what AI can do and where it can do it. Spoiler: AI is everywhere. Over the last decade, it has worked its way into the healthcare industry. It has guided consumers through support conversations and decided which agents talk to them. It has helped lenders determine which consumers to lend to. AI vehicles move around on their own, and in the insurance industry AI systems are starting to handle damage-assessment claims. In other words, AI is here, and it is here now.

In tourism, AI lets people plan their best possible holiday. In education, it provides tailor-made active guidance. The use of AI has the potential to improve our quality of life. Let’s be real for a second: we make history as we speak, and it is getting stranger all the time. Right now we live in a society rooted in data and AI, and the line between reality and CGI is blurring – and it’s okay if you like it, because I do too. In some tasks, AI already exceeds our own abilities. As early as 2015, deep artificial neural networks were generating convincing synthetic images. Scientists have used AI to build algorithms that recognize bird calls. AI software produces music that sounds like Bach’s or the Beatles’. Deep learning transforms ordinary images into high-quality pictures.

4.1. Healthcare

Today we can say that the hype around AI in healthcare is not just blind wishful thinking but a fairly realistic depiction of the future. Artificial intelligence is close to touching all facets of medicine and is making a notable presence in research and treatment. AI-enhanced diagnostics span radiology, pathology, genomics, and cardiology, and investment is flowing into AI-driven discovery of new drugs. AI-capable equipment can help determine the best placement, type, and dose of an implanted device. Treatment optimization and personalized medicine are big steps forward for AI: a model can be trained, for instance, to predict the effects of each drug on custom liver cells generated from the patient’s own stem cells. Some even suggest such systems could eventually stand in for the psychologist in critical empathetic decisions, since the underlying psychological model helps the interviewer probe a patient’s reflections more deeply. Patients with a clinical or research question can visit niche apps to talk to a bot. Freenome, from California, uses AI to analyze liquid-biopsy data for the early detection of cancer. The race to integrate healthcare and AI is already well underway; it is no longer the future, it is the present. Building and enforcing a framework and a method to replicate these instances and amplify their potential is extremely important.

The goal is to give doctors a personalized treatment strategist and support apparatus, much as robots assist in surgeries. One such use case of AI is a personalized treatment planner. Google’s DeepMind AI can analyze retinal scans and, from them, predict the age or gender of the individual; the same system can be trained to recognize approximately 50 diseases in OCT scans and predict which might be present. Smartarium, currently in the research stage, will use AI to collect patients’ data and apply it to a personal medication plan.

4.2. Finance

Financial analytics, financial operations, and regulation are all fields enhanced by AI. In financial analytics, AI is often used for forecasting economic and stock-market trends, for modeling, and for building decision-support systems that provide investment advice. Computers account for roughly half of trading volume in some financial markets, driven primarily by the automated evaluation of financial data by machine-learning algorithms. AI can support financial operations by evaluating available options in terms of future cash flows. In trading, AI algorithms classify potential mispricings and execute transactions, including estimating order fulfillment. Here, machine learning is often used to build an algorithm that makes one- or multiple-step-ahead predictions of the volume of transaction requests for a financial instrument, based on parameters of the trading agent and of the instrument itself.
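The one-step-ahead volume prediction described above can be sketched as a simple first-order autoregression fit by least squares. The toy data and the AR(1) form are illustrative assumptions; real trading systems use far richer models and features.

```python
def fit_ar1(series):
    """Least-squares fit of x_t = a * x_{t-1} + b on a 1-D series."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def predict_next(series):
    """One-step-ahead forecast from the fitted AR(1) model."""
    a, b = fit_ar1(series)
    return a * series[-1] + b

# Toy request volumes growing ~10% per step
volumes = [100, 110, 121, 133.1, 146.41]
print(round(predict_next(volumes), 2))  # -> 161.05
```

Multiple-step-ahead forecasts follow by feeding each prediction back in as the newest observation.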

Fraud detection has long drawn on techniques for analyzing financial-transaction data. AI can be used in the two-fold process of detecting fraudulent use and monitoring customer behavior to track it, in a variety of ways. AI models have been reported to cut by as much as 80% the number of “false positive” results generated by traditional authentication-reliant mechanisms and analytics. In customer service, applications use AI to answer customer questions and provide advice: the AI pulls information from knowledge-management applications, analyzes past client cases, and selects the best match for the client’s query. AI-driven chatbots deliver a range of financial services, such as micro-managing users’ accounts, rounding up savings as a wealth-management tool, acting as a personal concierge that tailors customer service to customer preferences, and keeping an inventory of financial information.
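A crude illustration of the false-positive trade-off in fraud monitoring: flag only transactions that fall far outside a customer’s historical spending pattern, and tune the threshold to balance missed fraud against false alarms. The amounts and the z-score rule are illustrative; deployed systems combine many such signals inside learned models.

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Flag indices of transactions whose amount lies more than
    `threshold` sample standard deviations from the mean."""
    mean = statistics.mean(amounts)
    sd = statistics.stdev(amounts)
    return [i for i, a in enumerate(amounts) if abs(a - mean) > threshold * sd]

# Six ordinary purchases, then one wildly out-of-pattern charge
history = [12.0, 15.5, 9.9, 14.2, 11.8, 13.0, 950.0]
print(flag_anomalies(history, threshold=2.0))  # -> [6]
```

Raising `threshold` reduces false positives at the cost of missing subtler fraud, which is exactly the dial the reported 80% reduction is turning with far more sophisticated machinery.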

4.3. Transportation

One of the most alluring kinds of AI application, as imagined since the previous century, is the self-driving car. While still not mature enough for large-scale production, it is being developed more or less seriously at a whole range of companies, including Tesla, General Motors, Ford, Lyft, Zoox, Google, Aptiv, Uber, AutoX, and Nuro. The goal is not only to develop cars capable of avoiding accidents but also to reorganize public transportation into a fleet of on-demand robot taxis. The technological basis for such approaches lies in various AI algorithms such as kernel methods, convolutional networks, and LSTMs; the field also employs computer vision, 3D image analysis, structure from motion, and various heuristics.

Another area AI is trying to conquer is the development of algorithms for controlling traffic lights, easing jams, and managing traffic overall. This application can be very profitable as well: road usage in big cities is far from optimized, and solving the problem could save up to 150 billion dollars a year. Several companies are engaged in traffic management; the most famous is probably Sidewalk Labs, a venture launched by Google, which now features the technology in more than half of its smart-city proposals. Siemens also has a product on the market, in the Simatic S7 family, that can control traffic lights. Another obvious transportation application is predictive maintenance: determining some time in advance that a vehicle or piece of infrastructure needs repair. The motivation is clear: a breakdown can quickly cost a few hundred dollars to repair, while servicing a vehicle becomes 3-9 times cheaper when a problem is detected more than a few hundred kilometers before failure.
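A toy version of adaptive signal control: split a fixed cycle among the approaches to an intersection in proportion to measured queue lengths, with a guaranteed minimum green per approach. The queue figures and the proportional rule are illustrative assumptions, far simpler than the controllers in the commercial systems mentioned above.

```python
def green_times(queues, cycle=60, min_green=5):
    """Split a fixed signal cycle (seconds) among approaches in
    proportion to queue length, guaranteeing each approach a
    minimum green time."""
    n = len(queues)
    spare = cycle - n * min_green      # seconds left after minimums
    total = sum(queues)
    if total == 0:
        return [cycle // n] * n        # no demand: split evenly
    return [min_green + round(spare * q / total) for q in queues]

# Four approaches: north, south, east, west (queued vehicles)
print(green_times([20, 10, 5, 5], cycle=60))  # -> [25, 15, 10, 10]
```

Real adaptive controllers also coordinate neighbouring intersections and predict arrivals, but the resource-allocation core is the same.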

4.4. Education

Machine learning is well suited to routine educational tasks such as grading. Other applications aim to help students learn, treating feedback as a form of instruction. These adaptive learning systems are informed by reinforcement learning and belong to the category of technologies called educational data mining: they improve themselves through repeated interaction with students, learning from negative feedback. Some authors regard profiling learners in this way as a form of determinism.
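The feedback loop these adaptive systems run can be caricatured by a staircase rule: raise the difficulty after a correct answer, lower it after a mistake. Real educational-data-mining systems learn much richer student models; this sketch only shows the shape of the loop, with made-up levels and history.

```python
def adapt_difficulty(history, start=3, lo=1, hi=5):
    """Staircase adaptation: step difficulty up after a correct
    answer, down after an incorrect one, clamped to [lo, hi].
    Returns the difficulty level after each answer."""
    level = start
    levels = []
    for correct in history:
        level = min(hi, level + 1) if correct else max(lo, level - 1)
        levels.append(level)
    return levels

# True = student answered correctly
print(adapt_difficulty([True, True, False, True, True, True]))
# -> [4, 5, 4, 5, 5, 5]
```

The same negative-feedback idea, scaled up with probabilistic student models, underlies most commercial adaptive tutors.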

Purposes supported by AI include instruction, instructional support, network support, support for student interactions, and administrative applications. Like their colleagues in other lifelong-learning disciplines, educators and education researchers point to the effectiveness of these systems as sources of research and evaluation data, and as providers of detailed, immediate, informed feedback to students that lightens the grading load for instructors. The systems are automated and reliable provided sufficient data is available, can teach large groups of students effectively, and are easy to maintain. They are commercially available products but can be used freely in some educational settings.

Trend watchers also predict that an increasing amount of learning will occur in these environments. Adaptive learning systems that model cognition at the level of individual cognitive “atoms” were highlighted in a 2005 report on Artificial General Intelligence, produced under U.S. National Science Foundation sponsorship, as an illustration of the state of AI and cognitive science. The report, like many in this field, is hyperbolic: “The greatest potential impact of educational AI activities will be the development of tutors available to students on demand, anywhere there is a networked computer.”

5. Ethical Considerations in AI Development

Referring to the ethical considerations associated with the future development of AI, there are several aspects in which the societal implications of AI deployment are relevant. We focus on the following:

5.1. Bias and fairness
5.2. Privacy and security
5.3. Autonomy and accountability

Bias and fairness are the first things that come to mind when thinking about the ethical aspects of AI, and for good reason: these are very important issues. Since moral beings must not use people as means to narrow individual gain, no agent should make decisions that harm others unfairly; fairness is a central ethical principle for decision systems. Indeed, machine decisions perceived as unfair are commonly seen as unacceptable, especially in settings with real-world consequences such as automated hiring systems. A very significant amount of work has been done on the problem of bias in machine learning, and how serious the problems are remains a subject of fair disagreement. It is known to be practically infeasible to build systems that are ideally fair by every standard, which points to research that matters for the long run. At the same time, it seems unlikely that we will ever have fully satisfying answers to “why did the classifier recommend you for a loan?” for unrestricted models: current explainable-AI (XAI) outputs are not on a par with the expert systems of the 1980s, and typically amount to “here is a visualization of an individual feature’s weights.” It could be more productive to focus also on accountability and legal regulation, while producing the strongest XAI models we currently can. A similar debate rages over “explainable” versus “unexplainable” AI.

5.1. Bias and Fairness

The reliance on statistical patterns of the past, as opposed to principles for the future, also creates the challenge of discrimination by proxy. In some cases, algorithms do not discriminate against individuals because of their race, gender, or other sensitive attributes directly. Instead, they may use a range of other indicators that correlate with those attributes (sometimes unknowingly), effectively modeling an individual’s sensitive attributes. The result is a tool or service that disproportionately harms one group, or affects one race, sex, or other group more than others.

In some cases this happens unintentionally, as when results from previous hiring decisions have a disparate effect. Other times it is intentional, as when people who hold explicit prejudice develop algorithms. Colloquially, this is referred to as disparate impact or indirect discrimination. There is, nonetheless, significant debate within AI and law about the extent to which an algorithm can be held accountable for this effect. Society has long used the notion of equality of opportunity as a conceptual framework for thinking about unfairness and inequality.
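Disparate impact can be screened for numerically. A common rule of thumb – the “four-fifths” rule used in U.S. employment-selection practice – compares selection rates between groups; the hiring outcomes below are hypothetical.

```python
def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher one.
    Values below 0.8 fail the common 'four-fifths' screening rule."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi else 1.0

# 1 = hired, 0 = rejected (hypothetical outcomes)
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # selection rate 0.8
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # selection rate 0.4
ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2), "fails four-fifths rule" if ratio < 0.8 else "passes")
```

A failing ratio does not prove discrimination by itself, but it flags a system, or the data behind it, for closer scrutiny.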

In a world of gender or racial disparities, treating people identically will perpetuate those differences, while treating them differently risks outcomes that are neither blind to the variations nor clearly positive. Testing machines for racial and gender effects within this broader evaluation framework lets users identify technical properties that capture some ethical and societal worries and, in doing so, supports data-based algorithmic solutions for ensuring that AI deployments are technically sound. Conversely, building AI programs that disguise, hide, or amplify the influence of sensitive attributes fosters AI with ethical and discriminatory concerns around its use.

5.2. Privacy and Security

Just as data has powered AI’s advances, AI is set to have a profound impact on data and associated technologies such as encryption and information retrieval. AI is built with more and more data – in particular, personal data. As a result, expert studies predict that AI will reorient the technology of privacy. “Protecting privacy and data will be one of the top computational systems problems facing AI in 2030,” predicts the AI@Oxford report. AI’s ability to analyze extremely large and diverse sets of data will “deepen the capabilities and proliferation of surveillance across society, and arguments in favor of privacy will have to challenge the assumptions inherent in the notion that privacy is valuable because it permits us to have a space in which our distinctiveness unfolds,” maintains the AI2020 report. It is stressed that this will all be up to us: “It will fall to us to decide if and when to deploy these tools.”

There is no question that AI poses immense privacy and data-security concerns. Beyond global trends such as surveillance and the use of AI-driven personal data for micro-targeting by political campaigns, we are particularly interested in AI’s implications both for the protection of privacy and for upcoming policy. Ethically and in policy terms, we face very different use cases for AI, developed with varying attitudes to privacy. Policymakers may be faced with AI that violates individuals’ privacy but helps design COVID-19 protocols and rapid cures; or they may find themselves regulating AI so as to maintain the principles that characterize democratic society. By the same token, distinct accounts of privacy in machine learning shape how surveillance cameras and cybersecurity platforms are evaluated, and how these trends help or hinder democracies and alternative forms of government.
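One concrete privacy-preserving technique behind several of these policy debates is differential privacy: answer aggregate queries with calibrated noise so that no individual’s presence in the data can be inferred. Below is a minimal sketch of the standard Laplace mechanism; the count of 42 and the ε value are illustrative, and real deployments manage a privacy budget across many queries.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Add Laplace(sensitivity / epsilon) noise to a query answer,
    the standard mechanism for epsilon-differential privacy."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of a Laplace(0, scale) variate
    u = rng.random() - 0.5
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return true_value + noise

# Counting query: sensitivity 1, since adding or removing one
# person changes the count by at most 1
random.seed(0)
noisy_count = laplace_mechanism(true_value=42, sensitivity=1, epsilon=0.5)
print(round(noisy_count, 2))
```

Smaller ε means stronger privacy but noisier answers; choosing ε is exactly the kind of trade-off regulators and designers must negotiate.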

5.3. Autonomy and Accountability

The kind of autonomy that some actors ascribe to AI – the system’s capacity to choose the correct response to novel situations based on a rich cognitive model of the world, by evaluating decontextualized, abstract, and teleological consequences – suggests that our governance frameworks should be reconfigured accordingly. For instance, is it already legitimate to allocate accountability for decisions requiring expert- or superhuman-level AI capabilities to the AI agents rather than to their operators, so that systems may operate freely? Whenever we discuss autonomy, we should also consider the possibility of producing consent because, as was pointed out, accountability requires an intersubjective allocation strategy.

This point leads to a set of societal questions. Do the investment paths and plans for AI development and deployment in governance and other application areas need to be made transparent to all direct or indirect stakeholders in society? Does the variety and quantity of AI functionality, and the decision models that follow, need to be made publicly available and verifiable? What process would build consent for such an allocation of autonomous decision-making to systems? And what are the limits on the use of autonomy? An accountability model that allocates decision rationales is designed to fit with plausible models of responsibility, and it sets the boundaries between technological and human responsibility.

6. The Future of Work in an AI-Driven World

Will machines take over your job? Will there still be job opportunities in the era of AI? The answer is a little of both. On the one hand, AI and machine learning can do some things better, faster, and more effectively than a human being, and automation will eliminate the types of work that robots perform far more efficiently. On the other hand, it also creates new possibilities: demand for jobs centered on creativity and empathy will increase. Automation is not so much eliminating jobs as eliminating tasks, starting with those that require the least experience and understanding.

Jobs lost to the economy will certainly be followed by the creation of new jobs built around digital technologies, including artificial intelligence. A study published by the World Economic Forum, “The Future of Jobs Report 2018”, documents the changing nature of work and a rising trend toward creative careers. The study predicts that robotics and artificial intelligence will take on ever more “routine” work, so that 75 million jobs worldwide will disappear by 2022 – but that the same period will also create as many as 133 million new opportunities. In other words, projected job creation still comfortably exceeds projected layoffs. The world is likely to continue to need humans, but industries will be forced to recruit workers with different skills. Companies that invest in reskilling and upskilling initiatives will stand a better chance of surviving in the changing world of work.

6.1. Automation and Job Displacement

Automation will bring about job displacement. Although some tasks within a job have a high likelihood of being reinvented, automated, or phased out as AI advances, applying this premise directly to any single job is a misapplication. While it is theoretically possible for any job to be automated, technological advances will change the nature of labor demand and the structure of labor markets in ways that are difficult to anticipate: they can create demand for new types of labor even while displacing workers from their current tasks. As well as displacing entire jobs, automation can take over tasks within jobs, changing the type and mix of skills that employers value – which creates challenges of its own. Given the broad range of potential within-job changes, any assessment of the overall impact of AI or other technology on labor demand (in headcount terms) is fundamentally uncertain. While worker displacement across industries and geographies is certain to occur, the differential effects on the distribution of skills could be more pronounced.

Moreover, while AI will drive increases in productivity, the growth in aggregate economic output could still be accompanied by serious societal challenges: rising inequality, and unemployment or underemployment, particularly among less-skilled and otherwise marginalized populations. Large numbers of workers displaced from their jobs will be ill-equipped for the occupations that remain, which are relatively more complex and technologically demanding. The gap between the tasks that workers can and cannot do is the epitome of a negative skills mismatch and could profoundly hinder progress. In addition to factoring in underdeveloped skills, any policy seeking to soften the social impact of the future of work would do well to consider the potential longer-term trends resulting from transformative technology – AI above all.

6.2. Reskilling and Upskilling

A cornerstone of an effective strategy for having workers benefit from the deployment of AI systems is reskilling and, especially, upskilling – i.e., improving the skills workers already have. At the individual level, the need for continuous learning and adaptation is obvious, as half of today’s jobs are at risk of being at least partially displaced by AI. At the enterprise level, upskilling employees allows organizations to deploy AI systems to their full potential and ensures there are people able to use them; the ability to benefit from AI systems will, in turn, nudge enterprises to adopt them. Large companies – already facing a labour market as competitive as it was before the pandemic – have started to focus on upskilling. In such a labour market it becomes harder, albeit not impossible, for workers to adopt a downside-protection strategy (e.g. investing in transferable skills instead of enhancing their non-transferable ones).

The precise strategies vary from company to company, but they all start with workforce planning, i.e., identifying the skills that will be crucial in the future. The list includes anything from basic technical skills (e.g. programming) to positively valued human traits (e.g. ethics). The main challenges for upskilling initiatives are financial and behavioural. On the one hand, employees who think they will not benefit from upskilling may oppose it, threatening successful implementation. On the other, upskilling schemes can lead to brain drain: if employees break the implicit psychological contract that links invested training to length of stay at the company, newly trained or current staff may leave the firm.

7. AI in Science and Research

At the same time, a relatively small but extremely significant share of AI is used in science and research, where these systems are gradually transforming the way science is done. One of the first areas where AI was able to contribute was the study of climate change, and many other scientific areas are now benefiting as well. AI has already been used in biotechnology, for example to narrow down candidate drug combinations and optimize therapies. Learning-based AI has helped scientists quickly process astronomical data on the movement of galaxies, and AI systems are modeling planetary climates, predicting what they will look like years from now.

Even methods for finding new mathematical proofs are being developed; such systems explore the consequences of mathematical theories in a formal setting, though how general those theories can be remains an open question. Meanwhile, two opposing views of AI are emerging: “AI is the greatest danger” and “AI is snake oil.” As AI takes on more and more tasks it can perform on its own, its weight in this debate increases. What will AI mean for research in a few decades? Beyond beating us at games and generating new kinds of art, it will help us answer both existing questions and entirely new scientific inquiries. For example, AI models have been used to simulate fluid flow through highly porous media, where pressure depends on pore surface area and volume; Darcy’s law then serves as a simplified descriptor of such flow, known today as the “porous plate” approximation.
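
To make the role of Darcy’s law concrete, here is a minimal Python sketch (not from the article; the material constants are illustrative values) computing the Darcy flux q = (k/μ)·ΔP/L for flow through a porous plug:

```python
# Illustrative sketch: Darcy's law as a simplified descriptor of flow
# through a porous medium. All numbers below are example values.
def darcy_flux(permeability_m2, viscosity_pa_s, pressure_drop_pa, length_m):
    """Volumetric flux (m/s) through a porous plug under a pressure drop."""
    return (permeability_m2 / viscosity_pa_s) * (pressure_drop_pa / length_m)

# Water (mu ~ 1e-3 Pa.s) driven through a 0.1 m core of permeable rock
# (k ~ 1e-12 m^2) by a 1e5 Pa pressure drop:
q = darcy_flux(1e-12, 1e-3, 1e5, 0.1)
print(f"Darcy flux: {q:.2e} m/s")  # -> Darcy flux: 1.00e-03 m/s
```

An AI surrogate model for such a system would be trained to reproduce this relationship directly from simulation or experimental data, rather than from the closed-form law.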

7.1. Drug Discovery

One of the core areas where artificial intelligence is expected to significantly improve healthcare is in drug discovery. Though the early stages of drug development typically rely on the random screening of chemical compounds for their potential to modulate a target protein, more and more pharmaceutical and biotechnology companies are embracing novel AI algorithms and computational methods in order to more effectively home in on successful new compounds. Despite these many recent advancements, significant challenges remain, and several AI-enabled biotechnology start-ups have come under fire for making highly questionable scientific claims. To many of us, it seems plausible that an algorithm-based approach to drug discovery might stand a chance of finding new therapeutics that currently remain overlooked by larger companies’ pharma programs, and that many of these new drugs could hit novel targets previously either thought undruggable or simply unknown.
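
The shift from random screening to algorithmically guided screening can be sketched in a few lines of Python. Everything here is hypothetical – the compound names and scores are invented, and the scoring function is a stand-in for a trained activity model:

```python
# Hypothetical sketch of AI-guided virtual screening: rank a compound
# library by a predicted activity score, then keep only the top candidates
# for (expensive) wet-lab validation.
def predicted_activity(compound):
    # Placeholder: a real pipeline would apply a learned model over
    # molecular features (fingerprints, molecular graphs, etc.).
    return compound["score"]

library = [
    {"name": "cmpd-A", "score": 0.41},
    {"name": "cmpd-B", "score": 0.87},
    {"name": "cmpd-C", "score": 0.65},
]

top = sorted(library, key=predicted_activity, reverse=True)[:2]
print([c["name"] for c in top])  # -> ['cmpd-B', 'cmpd-C']
```

The value of such a system rests entirely on how well the scoring model generalizes, which is where the validation concerns raised above come in.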

One hallmark of today’s big tech industry is a seemingly unquenchable thirst for raiding the halls of academia and recruiting the best and brightest researchers before they graduate. These companies woo the current crop of Ph.D. candidates and postdocs directly into the R&D centers of Silicon Valley with salaries that, in some cases, exceed those of comparable academic or traditional industry positions. Companies developing and deploying AI tools for pharmaceutical discovery have an overwhelming incentive to showcase their algorithms to synthetic biologists and medicinal chemists and, ultimately, to investors and the public. The most obvious prize for a pharmaceutical company would be novel, demonstrably AI-designed therapeutic compounds that survive the rigorous validation checks of the development pipeline.

7.2. Climate Change Modeling

In the sustainability domain, AI applications range from sustainability science to computer science and environmental modeling. There is a shared vision that AI can contribute to climate science by scaling model development to the increasingly fine-grained and voluminous data now available, and by yielding more accurate climate and environmental models for simulation and prediction. AI contributions to modeling range from bridging gaps in model parameterization and calibration to data-driven techniques that scale up the dimensionality of problems that can be solved. Ranging widely in techniques, the AI contribution to sustainability has grown rapidly since the early stages of research.

The links between AI and sustainability have been researched from a number of perspectives, including environmental modeling and empirical results, data analytics and environmental insights, sustainability software and applications, and game theory and policy modeling. AI can be used to develop techniques that calibrate and validate finer-scale models of environmental phenomena, and also to develop approaches that transcend traditional model development, verification, and validation. These techniques complement or provide alternatives to traditional computational and mathematical modeling. One of the most important applications of AI in climate science is climate modeling at a finer scale for more accurate predictions of environmental change; the use of AI in remote sensing, for example, provides vast amounts of data for developing predictive models based on contemporary scientific time series.
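
As a toy illustration of the data-driven modeling described above, the following Python sketch fits a cheap polynomial surrogate (an “emulator”) to a handful of runs of an expensive simulator; the quadratic “simulator” here is an invented stand-in for a costly physical model:

```python
import numpy as np

# Hedged sketch: fit a data-driven surrogate to the input/output pairs of
# an (expensive) simulator, one way AI scales up environmental modeling.
rng = np.random.default_rng(0)

def expensive_simulator(x):
    # Stand-in for a costly physics-based model.
    return 2.0 * x**2 + 0.5 * x

# Sample a few simulator runs, then fit a cheap polynomial emulator.
x_train = rng.uniform(-1.0, 1.0, size=50)
y_train = expensive_simulator(x_train)
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=2))

# The surrogate reproduces the simulator at a fraction of the cost.
print(round(float(surrogate(0.5)), 3))  # close to the simulator's 0.75
```

Real emulators use far richer models (neural networks, Gaussian processes) and far higher-dimensional inputs, but the calibrate-then-predict pattern is the same.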

7.3. Space Exploration

Artificial intelligence is indispensable for the success of space missions. The major applications are mission planning, robotics, and the management of large, complex autonomous systems; AI is also used for data analysis. Data from space telescopes, for instance, is so voluminous that without AI it would be impossible to find anything interesting in it – simply too much noise to pick out the signal. Science fiction anticipated AI-driven autonomous navigation (V’ger in Star Trek uses it), and our real far-away space probes rely on it today: they cannot ask flight controllers back on Earth in real time what to do next, since the round trip of their radio signals can take many hours.

Thanks to sensors and basic AI, satellites can image large parts of the Earth’s surface in real time or near-real time, enabling governments and other organizations to monitor air and sea traffic, agriculture, urban development, natural disasters, and more, as well as to serve reconnaissance and predict where fighting is about to break out. AI will add a lot of smarts to future satellites: they will be able to tell which data is most interesting and should be transmitted first, possibly even cooperating to build a kind of running bulletin of world events, further reducing both the volume of transmitted data and the number of human operators needed on Earth. For various reasons, crewed space travel is not expected to grow significantly: robots can tolerate the risks, the radiation, and the other hazards of a trip to Mars, and losing one costs only money. Deep-space missions to asteroids and the like will benefit from AI too, and AI is used to simulate how astronauts would feel and act in various situations on long space missions. At JPL (Jet Propulsion Laboratory), only a few scientists are needed to run all of the current missions; thanks to this synergy between space and computers, many more missions and discoveries are on the way than would otherwise occur.
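
The onboard triage idea – transmit the most interesting data first – can be sketched with a simple priority queue in Python; the frame names and “interestingness” scores are invented placeholders for what an onboard model would produce:

```python
import heapq

# Speculative sketch of onboard data triage: score each captured image by
# interestingness and downlink the most interesting frames first.
captures = [
    ("frame-001", 0.12),  # empty ocean
    ("frame-002", 0.91),  # wildfire plume
    ("frame-003", 0.55),  # ship cluster
]

# Max-heap via negated scores: pop frames in descending order of interest.
heap = [(-score, name) for name, score in captures]
heapq.heapify(heap)

downlink_order = [heapq.heappop(heap)[1] for _ in range(len(heap))]
print(downlink_order)  # -> ['frame-002', 'frame-003', 'frame-001']
```

On a real satellite the scoring step would itself be a trained model running on radiation-hardened hardware, but the queue-and-prioritize structure is the essence of the bandwidth savings described above.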

8. AI in Arts and Creativity

The aforementioned technologies can also revolutionize artistic creativity as we know it, not least because they increase the speed of production to the point where new aesthetic territory opens up alongside realistic work. AI is frequently used for music composition: Metsä, Altar, La figura, Wekinator, Continuator, Iamus, and FlowComposer are all examples of music-composing systems. These applications are commonly employed to generate performance scores rather than printed sheet music. 40-Hour Bach is the best-known application of HTM techniques. Probabilistic music-generating algorithms have the capacity to learn new and potentially unknown qualities of a given musical phenomenon.
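
As a minimal illustration of the probabilistic approach (and not of any of the systems named above), here is a first-order Markov chain over notes in Python; the transition table is an invented toy:

```python
import random

# Toy sketch: a first-order Markov chain over notes, the simplest
# probabilistic approach to music generation. Each note's successors
# would normally be learned from a corpus; here they are hand-written.
transitions = {
    "C": ["E", "G", "C"],
    "E": ["G", "C"],
    "G": ["C", "E", "G"],
}

def generate_melody(start, length, seed=42):
    rng = random.Random(seed)  # fixed seed for reproducibility
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

print(generate_melody("C", 8))
```

Systems like the ones named above go far beyond first-order chains, but learning transition statistics from a corpus and sampling from them is the core idea.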

AI-generated images have sold for six figures, and submissions of AI-made photos, films, and books to art collections, award shows, and film festivals are on the rise. AI can also be used in art museums to present little-known or unfinished works by recognized artists: in one example, a group of researchers analyzed Picasso’s unfinished work and used deep-learning image inpainting to complete 665 paintings. Deep generative models are now used to paint or otherwise produce art, and deploying them responsibly requires engineers with genuine expertise in the area.


8.1. Music Composition

AI composition has become a serious undertaking as AI innovations have taken huge evolutionary steps. AI systems are already powering cognitive robots, labeling and sorting photos on social networks, recognizing faces quite naturally, and coming up with music. You may already have heard the first album written and performed entirely by an AI, recorded together with a performance artist. It is definitely not top-40 material: Glitch is a messy, abrasive album of tracks first composed by AI.

Many more artistic experiments with AI in music followed. Lucid Pathways, for example, uses a machine learning algorithm called Actor-Critic Conditional Attend, Read, and Tell. Daily Painters uses a melodic bass generative system together with a non-chordal melodic accompaniment generator. Deep Dreaming of your Face, for violin and altered guitar tuning, implements a one-step StyleGAN generative adversarial network that transforms a target image of a face into one with the mel spectrogram of the Deep Dreaming of your Countenance visualization superimposed as a hand-drawn sketch. These pieces take an artistic direction, using music as an art medium, and address questions about the potential impact such systems might have on our perception of art and on our role and place as artists within the musical community. In short, AI is used in the creation of musical pieces by offering new combinations during musical activities.

8.2. Visual Arts

The history of AI in the visual arts is more recent than in the fields described above. Image generation has developed rapidly in recent years, revealing the potential of Generative Adversarial Networks (GANs); the generated images often carry symbolic and conceptual meaning and, as such, are considered art. Style transfer has demonstrated an effective way of “painting” the visual style of one image onto the content of another. As far back as the 1970s, the potential of knowledge-based systems in creativity was widely acknowledged, and AI-based aesthetics systems took shape by implementing expert knowledge as logical rules. Later, with advances in computational modeling, artists came to conceive of AI as a co-creator. More recently, that collaboration has been redefined as a kind of dialogue with AI – more generative and creative – framed within an extension of a utopian rhetoric of AI as a collaborator in art-making.

AI is developing an understanding of visual content while, on the other hand, visual artists and designers are extending their use of AI: as AI learns about human creation, humans use AI to create. The combination of these two currents drives this exploration of the future of art, co-authored by AI and humans. The basic point is that AI is already changing art, regardless of whether it has reached human-like creativity. On one side lies the question of how AI has been used in creative processes within visual art; on the other, the question of what kind of art AI can generate, its limits, and its differences from human art. Creating art has usually been considered an activity tied to human intelligence, but as AI keeps advancing, evidence has begun to emerge that machines can produce art with genuine artistic value.

9. The Role of AI in Government and Public Policy

The Big Brother Internet of Things: The Surveillance and Loss of Privacy

We are more than mere consumers when at home or in the street, inside public and private spaces. Citizens in a democracy occupy a special status: they are the focus of all public activity, not merely producers of data, and they are users of public services paid for by their taxes. Yet these technologies look at users through public administrators’ eyes and minds. We also have a right not to be watched, monitored, traced, manipulated, or harmed through our own data. Whether a “service” will be administered better, more efficiently, and more effectively for all is therefore a question set in a far wider context than mere cost analysis; it depends entirely on who voters and residents are, wish to be, or are allowed to become in the future.

The Use of Digital Telecommunications and ICTs in Policy-Making: Bias and Objectivity

The purpose of a government is to govern, which is to say, to make decisions that those who are governed cannot make for themselves. How, then, can AI decide what is done in the name of security when nothing and no one is physically threatening us until after the fact? It cannot: this is a space for ethics, not bioethics. A further point is the need to discern between asymmetrical values in the societal balance between rights and responsibilities – for example, between an individual’s right to mobility, which requires an objective assessment of the risk it poses to others’ right to life, or between biological rights and responsibilities towards others in a medical application of AI. The value structure is found in these broader ethical questions, and it lays the foundation for societal and public policies centered on the public good, where rules, norms, principles, and practices are situated.

9.1. Surveillance and Privacy

We hear of new developments in AI quite often. Some see this as progress we can be proud of; others grow concerned about surveillance, ethics, and privacy. Public surveillance using AI is increasing, especially now that we require accurate, fast, automated responses to cope with the challenges of the COVID-19 pandemic. Two common concerns are emerging around surveillance and privacy: the monitoring of individuals who may not want to be monitored, and discrimination coupled with a lack of privacy guarantees for those being tracked. The healthcare sector, in particular hospitals and clinics, invests in AI for recognition and speech processing. As more of these AI systems are employed in healthcare to detect, identify, or prevent disease, it is likely that the healthcare sector – and, in their capacity as major employers, other industries with it – will be on the frontier of implementing so-called surveillance capitalism. This makes healthcare contexts interesting cases for philosophical inquiry into surveillance: when people are tested, monitored, selected in their professional and private lives, or discriminated against on the basis of such data, we must ask whether we are moving into the realm of a (failing) surveillance state.

A difficult issue to settle concerns the capabilities AI provides for surveillance in public spaces. In a liberal democratic state, the government should provide public security via legitimate and accurate intelligence services. Analyzing scenes photographed in a busy street to locate possible suspects of a crime or terrorist act does not, on the whole, seem a big problem. But tracing all the people in those scenes is a different matter: it not only lacks consent but also sidesteps their civil liberties. Fortunately, frameworks for responding to such threats are multiplying. One discussion that urgently calls for further exploration is the value of privacy in relation to these surveillance systems. The problem of privacy is complex: it touches a trade-off not only between public security and the right to privacy but also the constitution of states, the laws and regulations they uphold, the (public) values they embody, and the civil rights they permit. A basic claim in this debate is that, under certain conditions, privacy constitutes a civil liberty. Astute observers have concluded that, because of the basic individual right to privacy, robust AI principles and a responsible, global approach to the technology – especially with respect to fundamental rights – are desperately needed. The implementation of surveillance AI thus requires several steps, all while staying within ethical, legal, and good-governance lines.

9.2. Decision-making and Governance

Researchers at Aachen and Imperial College London present an in-depth look at the governance of AI and its impact on decision-making processes. AI enables real-time analysis that applies across all policy and governance areas; for example, it can assess citizens to determine their eligibility for public services. Decisions from AI can be hardwired to consider cost and can even indirectly affect salaries and job security in public services. Yet the profit-driven development of AI means these systems are not neutral: they can be prejudiced and opaque by design, and they are in need of critical examination.
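
The kind of hardwired eligibility decision described above can be sketched as follows; the fields and thresholds are invented for illustration, and the point is precisely that such rules encode policy – and potential bias – directly in code:

```python
# Hypothetical sketch of automated eligibility assessment for a public
# service. Every field name and threshold here is invented; a real system
# might also use an opaque learned model instead of explicit rules.
def eligible_for_benefit(applicant):
    income_ok = applicant["annual_income"] < 20_000
    residency_ok = applicant["years_resident"] >= 2
    return income_ok and residency_ok

print(eligible_for_benefit({"annual_income": 15_000, "years_resident": 3}))  # -> True
print(eligible_for_benefit({"annual_income": 25_000, "years_resident": 3}))  # -> False
```

Even in this transparent form, the thresholds embody contestable policy choices; when the rules are replaced by a trained model, those choices become harder still to inspect and appeal.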

Public service users are increasingly processed by AIs, and in this context public actors’ decisions about complex AI systems raise questions about state legitimacy. Occasionally, trials are tempered by valiant user advocates who see that AI can be retrained and that companies or public bodies can expand resources to cope with an increased number of appeals. However, user resistance to trial-led determinations is often read as evidence of the trials’ efficacy; both confirmed trials and protests against them may reinforce the “predictive” element of administrative decision-making. Government attempts to privatize decision-making yield outsized profits, and the results further shape the subjects of contemporary automation. Despite redistributive impulses in welfare states, governments have always sought to contain or cut welfare spending; automated public service delivery, by making individual entitlements scarcer, effectively achieves this goal.

10. The Technological Singularity

Prepare for a heavy dose of the radical as we talk about the technological singularity! This hypothesis suggests that at some point in the future, human and machine intelligence will reach an intersection that ushers in artificial superintelligence (ASI). There are several definitions of the AI singularity, but the simplest rundown of the concept is this: indefinitely rapid technological growth whose consequences lie beyond the limits of present human experience. Hold on to your horses – that means a classic man-versus-machine showdown (on second thought, imagine The Matrix in real life).

The concept of a technological singularity was popularized by mathematician and science fiction author Vernor Vinge, who proposed that the creation of an ASI “will be a surprise”. Vinge was building on the earlier idea of I. J. Good, who had observed that a machine superintelligence “could design an even more intelligent machine” and that the new machine could do the same, leading to recursive self-improvement. This opens a causal loop of self-improvement, each cycle an order of magnitude more powerful than the last, though Vinge added that “the issue of the Singularity is controversial”. For this reason, modern interpretations are less concerned with the timing of the singularity and more with how it can be controlled and managed. Looking ahead, the singularity promises both radical technological change and radical, potentially dystopian social change. Optimists imagine singularity technologies solving global problems such as poverty and climate change while extending human lifespans and making human work more creative; pessimists imagine intelligent computers turning humans into pets – or paperclips.

11. The Existential Risks of AI

From its very first steps, AI raised concerns that perhaps we should not go down this road. And unfortunately, those concerns are not just jokes exaggerated by anti-science cranks: today this is no longer a science-fiction plot but an object of real mathematics. The point is that a superintelligent system could be more powerful than all the people on Earth combined and do something that is not good for humanity. It is the old story of the genie who grants all your wishes: formulate a wish carelessly and you may get exactly what you asked for, up to and including the destruction of humanity. So there is a real question of what to do with an AI system that could harm people. Of course, no one writes programs that intentionally harm people; but you can give a system a task and be too concise and laconic in its formulation.

There is, of course, a problem in AI development: a superintelligent AI is possible in principle, so we should think about how to keep it from being harmful, or restrict it to tasks that are useful for humanity. It is true that AI in its current form is by no means superintelligent. There is a wry definition of a “superintelligent” computer: we ask it to add 1 and 10, it answers 7, and we have no idea why it decided so. With conventional software we can always open the source code of the addition function and find the hard-coded “x = 7”. The worry with superintelligent AI is that the machine becomes so complicated – a very sophisticated neural network model, say – that no human understands how it reaches its conclusions.

12. AI and Human Augmentation

In the future, humans may embrace biotechnologies and implant various kinds of light-intensifying implants, sensory neuroprosthetics, or other cybernetic implants, and they may well want “all manner of implants, integrating computer networks with human minds in numerous ways.” Humans might also opt for brain-computer interfaces, where “smart devices” aid them in thinking. AI technologies could be integrated with the human brain to work symbiotically, enhancing and restoring cognitive processes – for example, to counteract brain degeneration or improve memory and intelligence. There may be physical and cognitive wearables or implants to restore locomotive functions, monitor physical health metrics, and more.

There are different definitions of AI-human augmentation. An older tradition, cybernetics, defines augmentation as enhancing human capabilities with computer technology: using “machines to help man.” The newer tradition of singularity or transhumanism tends to understand the idea in terms of human capabilities engineered to match those of advanced AI; in AI-human symbiosis, human beings function as biological computing systems overlaid with AI subsystems. Habib Davanloo (1990) suggested several other ways in which human decisions can be influenced by AI augments, specifically in the training of psychoanalysts, while Herbert A. Simon suggested that knowledge-based technology can provide “better descriptions and inferences than experienced human therapists.”

The embedding of everyday activities in digitally augmented surroundings suggests that human lives will increasingly intersect with the cognitive realms of artificial intelligence (AI). The quick proliferation of AI-powered consumer technology – from “smart assistants” to AI-driven systems supporting diagnostic imaging and customer support – is forcing a radical change in how human cognition interfaces with its cognitive environment. AI augments designed with sophisticated algorithms that replicate human discourse patterns can even function as “emotionally responsive avatars” in phone conversations: rather than being mere repositories for human questions, they exhibit apparently human-like agency, steering engagements and allowing for human follow-up. It is claimed that intentionally designed and marketed “entertainment technology” can be used to embed learning in everyday activities. The benefits of AI-human symbiosis are also explored in the business context: AI-bolstered human orientations (e.g., strategies, decisions, experiences) may be more competitive, though some argue that AI is often used not to improve stakeholder orientation but to maintain a strained status quo.

AI-human datasets may reflect, perpetuate, or even exacerbate existing social injustices, and ill-intentioned or poorly designed models can treat different groups unfairly. The embedding of everyday activities into augmented surroundings pushes the cognitive activities of AI and humans closer together when AI augments are used. In terms of complementary or balancing competences, for example, AI can take over routine cognitive tasks, allowing employees to use their time more efficiently. It has also been suggested that such enhanced competencies might function as “bridge competences,” integrating and balancing the capacities and incentives of various stakeholders. AI-human integrated interaction therefore raises ethical concerns around all of these questions.

13. AI in Popular Culture

The depiction of AI in popular culture goes back to early stories of living automatons – Frankenstein, the Golden Ass, or Pinocchio. AI is also richly portrayed in 21st-century literature, which reflects on, critiques, and extensively interprets it. One of the better-known examples is Philip K. Dick’s science fiction, which centers on artificial intelligence, consciousness, and ethics. A similar trend runs through many popular science fiction movies and TV series – The Terminator, The Matrix, Westworld, Ex Machina, Automata, and I, Robot – as well as video games.

The idea of an AI revolution appears across media and pop culture: books, movies, comics. These works span technological fiction, science fiction, the techno-thriller, and dystopian fiction. The vivid descriptions and convincing plots and characters of technological fiction and the techno-thriller serve as a means of “soft teaching”: they depict the progress and dangers of science and technology, promote the values of science and humanism, cultivate logical and scientific thinking, and strengthen readers’ capacity for independent thinking and practice. At the same time, many social and ethical issues around artificial intelligence (AI) are raised and discussed through the characters, plots, and devices of technology fiction: the digital divide, machine consciousness, robot ethics, the AI arms race, and so on. These issues are key to controlling AI risks and making AI serve humanity better. By unpacking pop-culture narratives about AI and the prosocial values they contain, this study aims to contribute both to anticipating our AI future and to understanding how pop culture can be used for public science education about AI.

14. Conclusion and Future Prospects

By way of conclusion, let us briefly survey the ideas laid out in the preceding parts of this paper. This allows us to form a new impression of the possible future that the epoch of the incipient AI revolution may soon bestow upon us: a kind of breakout beyond our closed ontological world, of which we have hitherto been the inexpungeable essence.

AI transforms the investigator from subject into instrument, mirroring the manufactured world against which investigators have until now stood, mediated by a series of homogeneous, static, and therefore especially distanced pictures. It also demands a reckoning with the most basic ontological concepts – is, existence, being in its own right (ipsum esse) – and with what might be called “the necessity inherent in existence itself.”
