AI and Power
In November, MINT Lab Research Fellow Sean Donahue traveled to the University of Sydney, the University of Hong Kong, and Carnegie Mellon University to present his research on platform legitimacy and digital governance.
This week at the MINT Lab Seminar, Jake Stone presented his research arguing that corporate involvement in open source AI isn't simply exploitative, but can create mutually beneficial partnerships when properly governed.
In this paper, Seth Lazar and Lorenzo Manuali argue that LLMs should not be used for formal democratic decision-making, but that they can be put to good use in strengthening the informal public sphere: the arena that mediates between democratic governments and the polities they serve, in which political communities seek information, form civic publics, and hold their leaders to account.
In this essay Seth develops a democratic egalitarian theory of communicative justice to guide the governance of the digital public sphere.
In this essay Seth develops a model of algorithmically-mediated social relations through the concept of the "Algorithmic City," examining how this new form of intermediary power challenges traditional theories in political philosophy.
Prof. Lazar will lead efforts to address AI's impact on democracy during his 2024-2025 tenure as a Senior AI Advisor at the Knight Institute.
Ethics for AI Agents
Seth wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
On December 5, Seth Lazar presented at the Lingnan University Ethics of Artificial Intelligence Conference on rethinking how we evaluate LLM ethical competence. His talk critiqued current approaches focused on binary ethical judgments, arguing instead for evaluations that assess LLMs' capacity for substantive moral reasoning and justification.
This week Robert Long spoke to the lab about his latest paper with MINT Lab affiliate Jacqueline Harding, arguing that near-term AI systems may develop consciousness, requiring immediate attention to AI welfare considerations.
In this seminar Emma argues that transformer architectures demonstrate that machine learning models cannot assimilate theoretical paradigms.
On September 30-October 1, MINT co-organised a workshop convened by Imbue, a leading AI startup based in San Francisco, focused on assessing the prospective impacts of language model agents on society through the lens of classical liberalism.
In this seminar Jen Semler presents her work examining why delegating moral decisions to AI systems is problematic, even when these systems can make reliable judgements.
Moral Skill
On 23 March 2024, Nick Schuster presented his paper “Role-Taking Skill and Online Marginalization” (co-authored by Jenny Davis) at the American Philosophical Association's 2024 Pacific Division Meeting in Portland, Oregon.
Our special issue of Philosophical Studies on Normative Theory and AI is now live. A few more papers are still to come, but in the meantime you can find eight new papers on AI and normative theory here.
Seth Lazar and lead author Nick Schuster published a paper on algorithmic recommendation in Philosophical Studies.
On 18 September 2023, Nick Schuster presented his paper “Role-Taking Skill and Online Marginalization” (co-authored by Jenny Davis) at the University of Leeds.
Sociotechnical AI Safety
With former acting White House Office of Science and Technology Policy director Alondra Nelson, Seth argued against a narrowly technical approach to AI safety, calling instead for more work on sociotechnical AI safety, which situates the risks posed by AI systems within the broader sociotechnical systems of which they are part.
This week, Harriet Farlow and Tania Sadhani presented their framework for analyzing the likelihood of AI incidents. Developed through a collaboration between Mileva Security Labs, the ANU MINT Lab, and UNSW, with funding from Foresight, their work aims to bridge short- and long-term AI risks through practical quantification methods.
Seth Lazar has been invited to attend a convening of the Network of AI Safety Institutes hosted by the US AISI, to take place in San Francisco on November 20-21.
In a new paper in Philosophical Studies, MINT Lab affiliate David Thorstad critically examines the singularity hypothesis, arguing that this popular concept rests on insufficiently supported assumptions about growth. The paper explores the philosophical and policy implications of this critique, contributing to ongoing debates about the future trajectory of AI development.