Conference presentation – EMERGE AI 11.12.2024

We gave a talk titled “Solving the Problem of Diagonalization” at the international conference and forum EMERGE 2024: Ethics of AI Alignment, hosted by the Digital Society Lab of the Institute for Philosophy and Social Theory of the University of Belgrade. Additional information about the event can be found on the conference’s webpage. You are invited to read our abstract below, as well as the other abstracts in the conference’s book of abstracts.

Solving the Problem of Diagonalization

The topic of AI alignment has recently risen in popularity due to the widespread availability of AI tools such as large language models (LLMs). However, not much attention has been given to theories of machine motivation that would seek to understand the formation of an AI’s goals and values. The paper aims to address this gap by contrasting Nick Bostrom’s orthogonal theory and Nick Land’s diagonal theory. Bostrom’s orthogonal theory, implicitly assumed to be correct in most AI alignment discussions, proposes that volitional structure (motivation) and cognitive capacity (intelligence) are independent. In contrast, Land’s diagonal theory challenges this premise, arguing that goal complexity rises with intelligence, ultimately leading to intelligence increase becoming the singular goal. As a result, alignment with human ethical values may be nearly unattainable.
The behaviour of natural intelligences indicates that Land’s diagonal theory may hold more validity, as human goals are much more complex and varied than those of less intelligent animals. This paper will delve deeper into this issue and present additional arguments supporting the diagonal theory’s validity. The paper will also explore whether aligning AI with human ethical values is feasible if the diagonal theory of intelligence proves correct. While this question is largely theoretical in nature and its effects may not be immediately apparent, we will demonstrate its relevance in the current socio-political climate. In particular, we will focus on its importance for freedom of expression and the future of democracy by highlighting the effect of AI on media centralization, the importance of data and algorithm transparency, and the threat posed by unaligned AI(s) in a multipolar world. AI technology has the potential to revolutionize social and ethical norms; its development and use must be carefully guided to safeguard these pillars of democracy and ensure their continued flourishing.

Keywords: AI alignment; orthogonality thesis; diagonal thesis; democracy; media transparency;

Deeds and Days publication

We are pleased to announce the publication of the first DISRUP paper. Our work, titled “The Basilisk and the Zombie: Exploring the Future of Life with AI through the Medium of Popular Culture,” has been published in the journal Deeds and Days, volume 70, within the section dedicated to technological transformations. You can access the publication here; the abstract of the research paper follows below:

The Basilisk and the Zombie: Exploring the Future of Life with AI through the Medium of Popular Culture

We are living in an era of technological transformation, characterized by a qualitative acceleration of time. Rather than time literally moving faster, it seems to be growing denser, with an ever-closer temporal clustering of noteworthy events. As Nick Land (2011) put it, “the current time is a period of transition, with a distinctive quality, characterizing the end of an epoch. Something – some age – is coming quite rapidly to an end.” The catalyst of this transition – technology – is the most likely candidate for the essential feature of the coming epoch. We explore various visions of technological society, found in our pop culture as well as in certain scholarly works, with a particular focus on two main motifs that seem to reflect an unconscious apprehension about the inevitability of the technological transformation of society. To this end, we attempt to explore and interpret the commonly recurring motifs of an unfriendly AI usurping humanity as the “apex of existence,” which we designate as “the Basilisk,” and of the reduction of humanity to automata, dubbed “the Zombie.” We use these motifs and their portrayals as a vehicle for exploring the future consequences of widespread AI technology and our society’s attitudes toward them.

Conference presentation – Cognitive science 12.10.2023

We participated in this year’s AI-themed Cognitive Science conference, part of the Information Society multi-conference hosted by the Jožef Stefan Institute.
We presented two papers on the AI alignment problem. You can access the proceedings here and read the abstracts of both articles below:

Orthogonalist and anti-orthogonalist perspective on AI alignment problem

Humanity has again found itself on the brink of a new era. Akin to the revolutionizing influence of previous technological innovations such as the steam engine, the printing press, computers and the internet, large language models (LLMs) seem poised to bring about important social changes. As these models advance in sophistication and complexity, the issue of AI alignment is gaining prominence as a crucial policy issue as well as a daily conversation topic. This research explores two contrasting viewpoints on the AI alignment problem: the Orthogonalist perspective pioneered by Nick Bostrom and the Anti-Orthogonalist critique formulated by Nick Land. The former posits that an AI’s goals are independent of its intelligence, suggesting that a "friendly AI" (fully aligned with human values) is possible. The latter challenges this separation of intelligence and volition, arguing that an increase in intelligence leads to a greater ability for self-reflection, ultimately leading to a restructuring of the AI’s volitional structure to prioritize further cognitive enhancement. We explore the anti-orthogonalist position in more detail, highlighting Land’s "instrumental reduction" of drives, demonstrating how every imperative is ultimately dependent on the Will-to-Think. We then discuss the implications of this position for the idea of "friendly AI", the role of AI in society and the future of AI research.

Social Volition as Artificial Intelligence: Science and Ideology as Landian Intelligences

This paper explores the equating of capitalism and artificial intelligence in the neo-cybernetic philosophy of Nick Land in order to reveal its underlying premises. These premises are then used to construct an explanatory framework for the analysis of macro-scale human social behavior, specifically collectives of agents united by a common goal: institutions. Institutions are conceptualized as distributed intelligences consisting of a substrate and an organizing principle: a market (a collective of agents) and a vector (an incentive structure geared toward optimizing for a particular goal). This framework is used to draw an analogy from the distinction between a free-market economy and a centrally planned one to the distinction between science and ideology, ultimately concluding that any top-down political or ideological interference in the operating mechanism of science removes the very element that makes the latter “scientific”. Strictly speaking, there is thus no such thing as politicized or ideological science, but only science and non-science.