We participated in this year’s AI-themed Cognitive Science conference, part of the Information Society multiconference hosted by the Jožef Stefan Institute.
We presented two papers on the AI alignment problem. You can access the proceedings here and read the abstracts of both articles below:
Orthogonalist and anti-orthogonalist perspective on AI alignment problem
Humanity has again found itself on the brink of a new era. Akin to the revolutionizing influence of previous technological innovations such as the steam engine, the printing press, computers and the internet, large language models (LLMs) seem poised to bring about important social changes. As these models advance in sophistication and complexity, AI alignment is gaining prominence as a crucial policy issue as well as a daily conversation topic. This research explores two contrasting viewpoints on the AI alignment problem: the Orthogonalist perspective pioneered by Nick Bostrom and the Anti-Orthogonalist critique formulated by Nick Land. The former posits that an AI’s goals are independent of its intelligence, suggesting that a "friendly AI" (one fully aligned with human values) is possible. The latter challenges this separation of intelligence and volition, arguing that increasing intelligence brings a greater capacity for self-reflection, which ultimately leads the AI to restructure its volitional structure to prioritize further cognitive enhancement. We explore the anti-orthogonalist position in more detail, highlighting Land’s "instrumental reduction" of drives and demonstrating how every imperative is ultimately dependent on the Will-to-Think. We then discuss the implications of this position for the idea of "friendly AI", the role of AI in society and the future of AI research.
Social Volition as Artificial Intelligence: Science and Ideology as Landian Intelligences
This paper explores the equating of capitalism and artificial intelligence in the neo-cybernetic philosophy of Nick Land in order to reveal its underlying premises. The latter are then used to construct an explanatory framework for the analysis of macro-scale human social behavior, specifically collectives of agents united by a common goal - institutions. Institutions are conceptualized as distributed intelligences, consisting of a substrate and an organizing principle - a market (a collective of agents) and a vector (an incentive structure geared toward optimizing for a particular goal). This framework is used to draw an analogy from the distinction between a free-market economy and a centrally planned one to the distinction between science and ideology, ultimately concluding that any top-down political or ideological interference in the operating mechanism of science removes the very element that makes the latter “scientific”. There is thus, strictly speaking, no such thing as politicized or ideological science, but rather science and non-science.