AI and Digital Sovereignty

1. Introduction to AI and Digital Sovereignty
For many European and other advanced industrialized ‘third’ countries, digital sovereignty is rapidly becoming part of their national DNA, a mantra that justifies certain domestic and foreign policy positions. Such digital sovereignty strategies are often used to buffer countries from perceived excesses and prejudices of US-centric or Sino-centric platforms and AI. Yet AI and digital sovereignty remain poorly understood, at best a set of dissimilar values, claims, recipes, and designs.
The social, political, cultural, and economic facets and vectors of digital sovereignty are vast and provide some of the questions most worthy of further study in ICT politics, ethics, and socio-economics. This essay offers a critique of the models and designs of digital sovereignty that have emerged so far, focusing specifically on the intersection of AI and digital sovereignty. In the first section, I discuss digital sovereignty as a normative claim and show how the philosophy and ethics of digital sovereignty can be sharpened by a theory of perspectivism anchored in hermeneutics. In the second section, I discuss AI as an intelligent, collective, machine-based supertechnology rather than as a set of discrete alternatives. Weaving these threads together, the third section considers the problems and prospects of the relationship between AI and digital sovereignty as they concern responsible development and the imaginaries of platform oligopolies, and sets the scene for the culminating section, in which Sabel and Simon’s vision of business and domestication points to how we might reframe digital sovereignty in relation to AI without dystopian hyperbole.
2. Historical Perspectives on Digital Sovereignty
Traditional diplomacy has been about the management of sovereignty. Its more universal tools were instruments such as alliances and embargoes, brought to bear upon those aspects of a sovereign state that the global community regarded as unacceptable. The emergence of digital diplomacy has, following the normative trajectory of acceptance of digital technologies, come increasingly to promote a modality of acceptance of global digital processes in which freedom of digital trade prevails. More recently, however, the issue has increasingly become one of ‘balancing’ digital freedom against social norms.
This essay traces a patchwork of origins for the term ‘digital sovereignty’, which carries different historical emphases in different national traditions. In the French context, the term is seen in the context of a shift from concern about the domination of the sovereign state by a foreign state to concern about its domination by foreign platforms; what is sought is leverage over cognitive capacity: intellectual property and valuable participants in data-production networks. In the German tradition, the notion is enmeshed in a broader legal, constitutional, and political discussion in which the ideal of individual self-determination must be realized at the level of standardization, through the (auto-generative) co-determination of digital technology. In the regional context of Commonwealth countries, digital sovereignty is a contemporary colloquial term designating the state of being free from improper external influence.
3. Key Concepts in AI and Digital Sovereignty
A shared understanding of basic concepts is necessary for a productive dialogue on the relationship between AI and digital sovereignty. Clarity demands that we establish, at a minimum, the basic terms and principles that underpin the debates around artificial intelligence and digital sovereignty, and that we situate those terms meaningfully within the broader theoretical and practical edifices to which they pertain.
That AI as a discrete field of scientific innovation and economic opportunity has outpaced critical appraisal of the concept itself is a significant feature of contemporary technological development. As a term, AI has come to encompass an array of machine-based systems and methods so broad and so diverse that its conceptual unity can be difficult to discern. It is therefore helpful to understand AI less as a specific set of technologies or lines of inquiry — neural networks, natural language processing, governance mechanisms, or algorithmic fairness — and more as a theoretical position, a point of view, or a way of doing things. AI is thus best understood as an instance of what the Canadian philosopher and computer scientist Michael Wheeler calls the “practical, on-the-fly-tinkering-with-scheme-of-interaction way of thinking”: a particular way of approaching problems of scientific and technical import. Today, that way of thinking is widely practiced and influential, but it is not definitive of all human activity or of the extent of scientific discovery. The practical way of thinking, as Wheeler shows, coexists with and operates alongside another heuristic: the “scientific modeling” way of thinking.
3.1. Artificial Intelligence (AI)
Introduced as the final frontier of technological evolution, Artificial Intelligence (AI), contrary to its cybernetically defined character, is commonly regarded by a considerable part of the scientific community as a Lockean “empirical” problem-solving mechanism. Along these lines, the “proxy thinking” discussed previously, shelved for the better part of information technology’s (IT) existence, can be said to be embodied by this new incarnation of the technological Other. The present iteration of the technology maintains several key features at its operational core that relate to the plurality of human biological, sociological, and psychological dynamics, by virtue of constituting a “paradigm shift” in computational history. This level of complexity derives from the computerisation of biologically derived, trial-and-error forms of cognition, coupled with probabilistic, multi-layered forms of so-called “fuzzy” algebraic computation, as well as the theory of epsilon transitions, which is able “to deal with problems that are computationally infeasible for DTM with today’s technology.” This hyper-systemic, multi-disciplinary computational integration of forms of intelligence that transcend the biological is enabled by artificial neural networks (NNs), which allow an AI system to possess, or more precisely to program, abilities typical of human psychology such as learning, inductive and deductive reasoning, and cognitive association.
These features, indeed “side-effects,” are in fact “intrinsic” to the technology, because they are the product of AI’s ability to mine a vastly complex range of data in a manner pivotal to modern IT knowledge management. Ultimately, then, AI is best defined by its function as a mechanism of sociocultural “knowledge distribution” (i.e. “conduit computer networks”) and by the application of that input to produce distributed outputs.
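To make the reference to learning through multi-layered networks concrete, the following is a minimal, illustrative sketch (in Python with numpy, an assumption of this example rather than anything prescribed by the literature discussed here) of a tiny two-layer neural network learning the XOR function by gradient descent; production AI systems differ vastly in scale and sophistication.

```python
# Minimal sketch: a tiny two-layer neural network learning XOR with plain numpy.
# Illustrative only; real AI systems use far larger, more sophisticated architectures.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic function a single-layer model cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a 2-8-1 network.
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates ("learning").
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Should approach [[0], [1], [1], [0]]; exact values depend on initialisation.
print(np.round(out, 2))
```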
4. Ethical and Legal Implications of AI
Whereas the first two parts of this essay have concentrated on the purely technological underpinnings of AI applications, we now turn to their ethical and social, as well as legal, dimensions. The increasing use of AI technologies in society has wide-ranging consequences, which also raise ethical and legal questions.
Before delving into these issues, it is essential to point out that in many countries around the world there is broad recognition of the complexity of the repercussions of AI technologies, which has prompted comprehensive reviews of the frameworks designed to address and regulate them. France was one of the forerunners in this effort with the adoption of the Political Framework for AI in France in 2018. The political framework for the French government’s strategy identifies various priority areas and suggests several legislative reforms. One of the key emerging themes of the report is the sovereignty of nations, and in particular the notion of “digital sovereignty”, which the report sets as a guiding principle for a human-centered AI. The aim of France, the report emphasizes, is to ensure digital sovereignty and ethical independence (p. 34). This also stands at the core of the European Union’s framework and guidelines for AI and is examined in a dedicated section of this essay.
5. National and International Policies on Digital Sovereignty
Given that ‘national and international policies tend to be the fruit of compromises between the positions of different member states’, this section outlines the different approaches, drawing on a meta-analysis of findings from the Horizon 2020 project, before illustrating recent cases and strategies. Dafermos finds a wide gulf between discourses and strategies of digital sovereignty, one that calls for linking further strategic policy objectives to collective interests and the use of AI; yet the very fact that these discourses differ is evidence of a systematic policy landscape and of several generations of policy intervention associated with the various forms of AI.
Slater et al. argue that ‘much of the content and panoply of digital sovereignty aligns with long-standing research traditions in security studies: cyber security, state cyber strategies, and the role of international law’, especially with research discussions that ‘construe cyber attacks as an existential threat’, defined as ‘attacks on critical infrastructure to the extent that the damage would extend beyond the battlefield to civilians and civilian targets’, with perceptions of ‘information control as a survival strategy for repressive states with real fears of a colour revolution’, and with large-scale surveillance as ‘a technology of modern power central to envisioning and conducting contemporary wars’. Others make similar assumptions from the standpoint of managerial authorities, running their critiques through new tools and governance capacities.
6. Challenges and Opportunities in Achieving Digital Sovereignty
Challenges
There are two key challenges. First, the development of AI has allowed the emergence of digitally sovereign value chains that are competitive on a global scale. Second, AI is perceived as a dual-use technology with military applications, which gives digital sovereignty in this field fraught strategic connotations. Despite these challenges, we increasingly see efforts to promote digital sovereignty in relation to AI. Some may be “mere” attempts at “re-nationalizing” control of technologies and digital value chains, which underlines that digital sovereignty is as much a matter of fact as it is a normative claim. We believe, nonetheless, that a long-term perspective on digital sovereignty in the field of AI will reveal the opportunities that AI enables.
Opportunities
These initiatives develop in a context in which “AI is reorganizing knowledge production through the collection and analysis of an unprecedented number of digital traces and classified data.” AI, increasingly democratized and autonomized, carries social imaginaries: visions of autonomous robots (Delémont/Frenay/Lemoine, Paris, PMBC position paper n°4, workshop on Autonomous Robots, Social Robotics 2018, International Conference). At the social, political, industrial, and cultural levels, this may open up new “strategic autonomy opportunities” in Europe, in the relation between the AI sector and the infrastructures it depends on. Data, as much as the algorithms used to weigh, interpret, store, and exchange them, are facets of the same phenomenon. While sovereignty allows the making of war, every particular sovereignty is ultimately about shaping relations as much as about firing a kinetic bullet. And vice versa: every added point of sovereignty enhances the sovereign’s war-making capacity – capabilities for which Silicon Valley’s big-tech clunkers, with their engines and wheels configured in series, are far from sufficient.
7. AI and Data Privacy
AI technologies and data privacy are two key areas of digital ethics and demand critical consideration. The possibilities and features of AI technologies are inextricable from a broader understanding of data analytics. Data is the foundation of many machine learning-based applications, which raise diverse data privacy concerns. AI technologies, both in the development phase and after deployment, may also advance current data privacy standards. Privacy by design aims to incorporate privacy measures into technology from the initial design phases onwards in order to protect users. This approach is especially useful in the case of AI technologies because of the vast amount of private data they may access and use.
However, the close relationship between the parallel development of AI and privacy raises major concerns. First, AI operations can undermine the data privacy notices at the core of data privacy requirements. Second, as many data practices are shielded by trade secrets and intellectual property (IP) protections, AI-driven trade secrets can exacerbate the “black box” problem in the enforcement of privacy regulations. Third, innovative data analytics can disrupt de-identification requirements and legal standards, as previously anonymized data is re-associated with individuals using new AI-driven methods. Fourth, technical AI advances can subvert limits on data use by re-utilizing data with vast numbers of features or marginal data categories. Public awareness of these challenges remains low, which is itself a major issue. Government efforts to address AI-driven privacy concerns have mainly focused on predictive analytics and discrimination, leaving public discourse on other data privacy issues subsumed by wider discussions of AI ethics and “robot rights.”
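As a hedged illustration of the re-identification concern described above, the following toy sketch (hypothetical data, pandas assumed) shows how a “de-identified” dataset can be re-associated with named individuals by joining on quasi-identifiers; real attacks use richer auxiliary data and statistical matching rather than an exact join.

```python
# Toy illustration (hypothetical data): how 'anonymized' records can be re-identified
# by joining on quasi-identifiers such as postal code, birth year, and sex.
import pandas as pd

# A "de-identified" dataset: names removed, sensitive attribute retained.
medical = pd.DataFrame({
    "zip":        ["8001", "8002", "8001"],
    "birth_year": [1984, 1991, 1984],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],
})

# A public dataset (e.g. a voter roll) with the same quasi-identifiers plus names.
public = pd.DataFrame({
    "name":       ["A. Muster", "B. Beispiel"],
    "zip":        ["8002", "8001"],
    "birth_year": [1991, 1984],
    "sex":        ["M", "F"],
})

# The join re-associates names with diagnoses whenever the quasi-identifier
# combination is unique (or nearly unique) in both datasets.
reidentified = medical.merge(public, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```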
8. AI and Cybersecurity
The interface of artificial intelligence (AI) and cybersecurity has been identified as an important topic in both academia and private industry. AI has been described as a double-edged sword that can simultaneously be a cybersecurity solution and a cybersecurity threat. Synergies between AI and cybersecurity exist on a number of levels. Beyond straight defense, predictive maintenance is another area in which AI may benefit cybersecurity. AI can also be used to automatically correlate events and anomalies and to find patterns in network data that indicate malicious behavior. For the attacker, AI provides tools with which new and more advanced attacks have been and will continue to be developed. Consequently, there is concern that offensive cyber capabilities (including offensive AI) will be increasingly employed in conflict.
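As a minimal sketch of the defensive use case described above (finding patterns in network data that indicate malicious behaviour), the following example trains an unsupervised anomaly detector on hypothetical flow features using scikit-learn; the feature choices and thresholds are assumptions for illustration, not a recommended production setup.

```python
# Minimal sketch (assumed toy features): flagging anomalous network flows with an
# unsupervised model, one common way AI is used to surface possibly malicious behaviour.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical flow features: [bytes sent, packets, distinct destination ports].
normal_flows = np.column_stack([
    rng.normal(5_000, 1_000, size=500),   # typical byte counts
    rng.normal(40, 10, size=500),         # typical packet counts
    rng.integers(1, 5, size=500),         # few destination ports
])
suspicious_flows = np.array([
    [90_000, 900, 60],   # exfiltration-like volume
    [1_000, 800, 120],   # port-scan-like fan-out
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# predict() returns -1 for anomalies, 1 for inliers.
print(model.predict(suspicious_flows))   # expected: [-1 -1]
print(model.predict(normal_flows[:5]))   # mostly 1
```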
In the context of cybersecurity specifically, national governments play a particularly important role in the regulation of cyberspace. Taking a more general complex-systems view of AI underscores the points where regulation of AI may have cascading effects on the security of core systems, and in doing so exposes creative pathways for reinforcing the system architectures of AI and cybersecurity in productive, harmonizing directions. Encouraging convergence towards such a systemic-complexity view across the technology policies and governance practices emerging in the global ecosystem of AI research, the industrial application of AI, and the international security context brings AI for cybersecurity into view as an especially complex and promising future use of AI and big data technologies in the digitally sovereign context.
9. AI and Economic Impacts
Automation and machine learning alter a wide variety of socio-economic systems: entire branches of industry, such as mobility and finance; the organization of labor, such as the emerging gig ecosystems and labor market access for the unskilled; and the relation between consumers and producers of information goods, such as the profiles used for marketing purposes on a free internet. Moreover, these technologies have the potential to “re-organize the economy as a whole” by deeply transforming global value chains, competition strategies, interest rates, property rights systems, and the basis of national income, and even the hypotheses underlying the classical game-theoretical models that economists have used until now. They also act as incentives or points of contention in national and international politics, for example in world trade negotiations or bilateral agreements such as the EU’s digital single market. Whenever technological change has accelerated over the last decades, it has also led to the reallocation of resources into the new industries of the virtual, mobile, and cloud economies.
The economic impacts of artificial intelligence are already so far-reaching that deploying such systems can be judged positive or negative from a national standpoint. At present, three patterns or structures of (national) economic advantage can be identified that could be nurtured by a digital sovereignty-friendly engagement with AI. First, AI either complements traditional domestic goods and services through a demand effect connected to higher consumer confidence, or reduces their costs through a supply-side effect when AI is included as an input factor, which in turn benefits classical import substitution in those industries.
10. AI and Social Impacts
Social dimensions are attracting increasing interest in the debate on artificial intelligence (AI) and digital sovereignty. They form part of the broader cluster of internal and external leverage points. While the former concerns the EU’s capability to cover its needs – in strategic goods, services, knowledge, or skills – so as to independently shape its preferred future and remain master of its own fate, the latter extends the principle to the degree of dependence of countries and their businesses, data, algorithms, infrastructures, software, and hardware on other third countries, a dependence that might lead to unfree choices or run contrary to basic EU values. Both are intertwined and unfold within three different realms, intersecting the regulatory and substantive issues already addressed: the domain of well-being (Section 9.1), which describes the potential social impacts of AI; the sphere of re/de-regulated societies (Section 9.2), which refers to the new digital political economy; and social justice and fairness (Section 9.3), which delves into AI’s potential cross-border and global-justice impacts, their ethical underpinnings, and the limitations, so far, of confronting global challenges within the EU’s AI digital sovereignty ambitions.
This tripartite structure elaborates on the dynamic and complex playing field in which societal structures and processes interact with, and feed back on, the AI applications scrutinized through the non-social dimensions considered so far. Our general ambition in this section, therefore, is to show how AI relates to institutional and socio-technical interlinkages that materialize in multiple societal, cultural, economic, political, and environmental settings. By contrasting the European Union’s agenda with those of other world regions, we can also reflect on the fundamental commonalities that make such settings global, and on where exactly they intersect the specificities that no AI strategy seems to mention when addressing digital sovereignty or artificial intelligence in the relation between society and knowledge.
11. AI and Cultural Impacts
Generative AI models, of which GPT-3 is a prominent example, can create poems, works of art, and stories that seem to border on consciousness. They produce meaning, albeit meaning framed by their particular training data. Regardless, there are ethical questions to ponder concerning creatorship and the meaning of anthropocentric values and culture in the age of algorithms.
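A minimal sketch of the kind of generative capability discussed here, using the openly available GPT-2 model via the Hugging Face transformers library as a stand-in for larger GPT-3-class systems; the prompt and sampling settings are illustrative assumptions.

```python
# Minimal sketch: generating a short poem with an openly available generative
# language model (GPT-2 here as a stand-in for larger GPT-3-class systems).
from transformers import pipeline, set_seed

set_seed(0)  # make the sample reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "A short poem about digital sovereignty:\n"
result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)

print(result[0]["generated_text"])
```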
Language and the arts are windows into the myriad cultures and beliefs that make humanity diverse. How AI is embraced and used, from a cultural standpoint, also reflects a shift in the values held by contemporary society. Considered in its cultural dimension, the commoditization of AI likewise reflects what a society values; in many cultures, song and dreams are worth far more than a handful of dollars.
Given the speed with which AI is growing, its cultural impact on contemporary society is difficult to enumerate. I believe its cultural significance can already be glimpsed, on a small scale, in the counter-culture movements of “bothsidesism” that have chosen to reject elements of “big tech” and its associated AI products. Culturally, AI is provoking substantial reckonings over archiving practices and the legal definitions of “authorship” and “ownership.” Said another way, almost all of human language and a significant percentage of our art now subverts copyright because of these new AI authors. Bilateral accords would need to be redrawn if, for instance, Europe were to suspend copyright temporarily, risking the mass deletion of creative works and orphaning creators of revenue in exchange for an expanded public domain.
12. AI and Geopolitical Implications
While geopolitical analyses mainly focus on AI as a disruptive technology and on the politically motivated surge in AI, this angle tends to overlook that AI does not merely shape political power configurations but is also shaped by them. A more variegated view would take into account the technological, economic, and political factors surrounding AI, such as the current state of innovation, its possibilities and limitations, the absence of truly global platforms (such as a global cloud, as we will see), the importance of hardware in addition to software, geographic unevenness in data availability, and the world-ordering role of the trade, industrial, and competition policies that characterize a technological cold war. This approach implies a more pragmatic perspective, seeing AI as a technology with strategic implications and with all the potential it can offer to great powers and their allies.
The contemporary drivers of the digital sovereignty debate are largely related to the advent of AI. AI is viewed as a geopolitical force, and nations that possess AI capabilities and large amounts of available data are seen as holding significant advantages over others in terms of global and regional power. AI is expected to revolutionize not just national and regional but global geopolitics. In addition to being the cutting edge of technology, AI brings with it the ability to develop even more sophisticated and high-end technologies in various niches. AI’s theoretical potential for cyber warfare, and its relevance as a dual-use field, have given it considerable leverage in policymaking circles. Some researchers suggest that AI may be just another S&T revolution in an ongoing cycle of change in world order, sharpened and redefined by changes in economy, technology, polity, and culture. AI’s implications, we can say, bear on the decline and maintenance of power: AI is crucial to the rise and fall of nation-states in their race for global power. For some researchers, AI brings the ability to shift world power towards a new configuration of the techno-economic ecological model. In addition, AI disrupts the military-industrial-academic complex that has governed world order since the Industrial Revolution. AI’s capacity for extreme convergence with emerging technologies means that it is progressively less a standalone tool or weapon; rather, it contributes additional power to both classical and cutting-edge technologies in the making. Over time, AI is a capacity that can contribute to making or breaking world power, and neither AI itself nor AI-acquiring countries can be treated as a single state variable. Given the preponderance of proprietary technologies, strategic convergence around and reliance on AI will confront a multiplicity of allied-AI dichotomies, weakening strong alliances and the capacity of allies reliant on technologies from antagonistic home countries to restore bilateral power equivalence. Finally, AI is directly related to economic prowess and has the capacity to displace US primacy over the global ecosystem, leading to a reclamation of sovereignty and a redistribution of both global military and economic power.
13. Case Studies in Digital Sovereignty
In this era of AI, we need case studies to translate the digital sovereignty agenda into concrete political, economic, and social action. What does digital sovereignty mean, in precise terms, in the different fields of application of AI, services, and infrastructure? What challenges and opportunities are materializing in the institutions and the geoeconomic power built around AI technology? That is, how can we mine the actual ‘policy space’ to introduce greater degrees of public power and to bring forth alternative AI service and infrastructure ecosystems that realize our substantive goals and visions under present, very concrete conditions?
In this section, we first examine the particular case of Swiss Post and how the implementation of AI in the public sector can lead to new forms of data flow and governance that serve its chartered political mission. The Swiss case is interesting not only because Swiss Post has not been as extensively involved in illegal activities as its German counterpart, but also because of the intimate relationship between the Swiss financial system and organized crime. Second, we turn to the problem of public tenders for digitally sovereign solutions. The question of when, how, and from whom to buy in support of substantive digital sovereignty visions is becoming politicized. In the Austrian case mentioned, evidence pointed towards ulterior motives behind the scandalous acquisition. The UK’s recent scandal of favoring well-connected suppliers by forgoing the normal competitive process shows just how political digital sovereignty may become.
14. Future Trends in AI and Digital Sovereignty
Future trends in AI and digital sovereignty: what are the emerging trends in AI and DS that are most relevant to this agenda? There is still too little work in this area, although some priorities have emerged from the EoI information session and the EoI responses. These include: a social science research agenda that identifies prospective negative externalities of the AI-DS transformation, such as inequalities, environmental damage, and ‘monopolistic’ control of data; a major gap in empirical research that takes these externalities as its objects and subjects them to investigation (anecdotal and early empirical evidence suggests that policy and technological interventions to limit AI/DS may be the reverse of what is needed as challenges emerge or worsen); investigations connected through a ‘real-world laboratories’ approach, which may require the development of new methods in science, technology, and data analysis; and a related set of research priorities involving technology roadmaps for the transition towards federated AI and Edge computing, which could underpin a ‘green transformation’ embedding AI-DS.
These priorities foreground the prospective negative impacts of future AI applications on social and ecological inequalities and on environmental degradation, and call for empirical inquiry aimed at informing data-driven decision-making about the allocation of social and ecological resources. Some studies already demonstrate the limits and risks of current technology; however, there is no research on whether, as specific AI/DS applications increase in scale or are applied ubiquitously, they reveal phenomena that manifest differently and that are sensitive to technological interventions. Managing real-world experimentation is therefore a third research priority: developing roadmaps to empower federated AI and Edge computing, aimed at mitigating consequences (where they are undershot) or ‘combating’ accelerations (where they are overshot) in support of global goals, and at seeing how the ecological benefits of federated AI and Edge computing can be optimized. The four proposed demos that follow are illustrative and indicative. They would also inform the practical challenges of developing such systems within our Fourth Labs organization. The exact approach to these demos would require careful experimental design to maximize insights.
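As a hedged sketch of what ‘federated AI’ could mean at the algorithmic level, the following toy example implements federated averaging (FedAvg) over simulated clients in plain numpy; the data, client count, and learning rate are illustrative assumptions, and real deployments add secure aggregation, differential privacy, and edge-device orchestration.

```python
# Minimal sketch of federated averaging (FedAvg), one building block behind the
# "federated AI" roadmaps mentioned above. Clients train locally on data that never
# leaves them; only model weights are shared and averaged. Toy linear model in numpy.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

def make_client_data(n):
    # Each simulated client holds its own private local dataset.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client_data(50) for _ in range(5)]

def local_update(w, X, y, lr=0.05, epochs=20):
    # A few steps of local gradient descent on the client's private data.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

global_w = np.zeros(2)
for round_ in range(10):
    # Each client refines the current global model locally...
    local_weights = [local_update(global_w.copy(), X, y) for X, y in clients]
    # ...and the server aggregates only the weights, never the raw data.
    global_w = np.mean(local_weights, axis=0)

print(np.round(global_w, 2))  # approaches [ 2. -1. ]
```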
15. Conclusion and Recommendations
Without algorithms trained on big data and running continuously on digital infrastructures, the many applications of artificial intelligence (AI) that are changing or will change our lives would not be possible. If we unpack them further, we see that there are real and active agents in the design, creation, control, and use of AI, and that the set of such agents is highly diverse. This fact alone – that AI is a construct – bears on how we perceive AI, how we recommend policies for it, and how we determine standpoints in ethical arguments concerning AI in the context of digital transformation. But these observations by no means exhaust the issues we currently face concerning AI, humans, society, and the digital.
We began this essay with reflections on the self-understanding of societies concerning their digitalized futures and proposed a vision of digital sovereignty. Drawing on all of the above, we now propose recommendations aimed at guiding action and future research to effectuate that vision. The recommendations address the needs and steps for fostering both the diversity of, and engagement with, the values that AI, AI design, and AI policies incorporate. These are the values we, as collectives, cherish; they shape our self-understanding and govern our interactions and innovation across different parts of the globe.