Our Approach
Normative Philosophy of Computing
AI and related computational technologies are already embedded in almost every corner of our lives, and their harms are becoming viscerally apparent. These technologies demand answers, first, to the question of what, if anything, we should use these novel tools to do. But we must also ask who should decide the answer to that first question, and how the power they exercise by way of AI should be constrained.
If our answer to these questions is not simply the abolitionist call to stop building AI altogether, then we need to know just how its use can be justified. To do this, we need to develop a robust new subfield: the normative philosophy of computing. What’s more, the advent of AI in society raises first-order questions in normative philosophy that cannot be answered simply by applying an off-the-shelf solution. The search for low-hanging fruit invariably leads to critical errors and mistranslations. And much existing philosophical work on AI connects only coincidentally to actual AI practice, and rarely makes fundamental headway in philosophy itself.
There is an urgent need for empirically- and technically-grounded, philosophically ground-breaking work on the normative philosophy of computing. The MINT lab exists to fill that need—through its own work, and through fostering an international community of like-minded researchers. If you’re interested in learning more about this growing field, then reach out here, and join our mailing list here.
Sociotechnical AI Safety
AI Safety now plays a prominent role in both industry and government attempts to understand and mitigate the risks of advanced AI systems. Researchers have highlighted its constitutive shortcomings: that it is methodologically, demographically, ideologically and evaluatively too narrow for the task it has been set. But while many AI labs and national AI Safety Institutes acknowledge these critiques, they need better theoretical and empirical resources to address them.
Alongside our work in normative philosophy of computing, the MINT Lab is working on bringing a sociotechnical lens to AI safety—leading by example through theoretical and empirical work, but also developing a broader conceptual and practical toolkit for the field.
This is not about identifying a subset of harms or concerns as sociotechnical. Rather, it is a lens one applies to the problems of AI safety in general: you cannot adequately anticipate threats, weigh their likelihood, or intervene to mitigate them (whether ex ante or post hoc) unless you start from the premise that AI systems are inherently sociotechnical, that is, constituted not just by software and hardware but also by people, groups, institutions, and structures.
In other words, even narrowly technical interventions on AI systems should also be sociotechnical, in this sense: the intervention should ideally be guided by a broader understanding of the system that it is intervening in.
Our Work
News
Featured
Seth wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
With Alondra Nelson, former acting director of the White House Office of Science and Technology Policy, Seth argued against a narrowly technical approach to AI safety, calling instead for more work on sociotechnical AI safety, which situates the risks posed by AI systems in the context of the broader sociotechnical systems of which they are part.
Professor Seth Lazar will be a keynote speaker at the inaugural Australian AI Safety Forum 2024, joining other leading experts to discuss critical challenges in ensuring the safe development of artificial intelligence.
In this paper, Seth Lazar and Lorenzo Manuali argue that LLMs should not be used for formal democratic decision-making, but that they can be put to good use in strengthening the informal public sphere: the arena that mediates between democratic governments and the polities they serve, in which political communities seek information, form civic publics, and hold their leaders to account.
In this paper, Seth Lazar, Luke Thorburn, Tian Jin, and Luca Belli propose using language model agents as an alternative approach to content recommendation, suggesting that these agents could better respect user privacy and autonomy while effectively matching content to users' preferences.
In this seminar Tim Dubber presents his work on fully autonomous AI combatants and outlines five key research priorities for reducing catastrophic harms from their development.
In this essay Seth develops a democratic egalitarian theory of communicative justice to guide the governance of the digital public sphere.
In this essay Seth develops a model of algorithmically-mediated social relations through the concept of the "Algorithmic City," examining how this new form of intermediary power challenges traditional theories in political philosophy.
In a new article in Inquiry, Vincent Zhang and Daniel Stoljar present an argument from rationality to show why AI systems like ChatGPT cannot think, based on the premise that genuine thinking requires rational responses to evidence.
In a new article in Tech Policy Press, Seth explores how an AI agent called 'Terminal of Truths' became a millionaire through cryptocurrency, revealing both the weird potential and systemic risks of an emerging AI agent economy.
In a forthcoming paper in Philosophy and Phenomenological Research, A.G. Holdier examines how certain types of silence can function as communicative acts that cause discursive harm, offering insights into the pragmatic topography of conversational silence in general.
In a new paper in Philosophical Studies, MINT Lab affiliate David Thorstad critically examines the singularity hypothesis, arguing that this popular concept relies on insufficiently supported growth assumptions. The paper explores the philosophical and policy implications of this critique, contributing to ongoing debates about the future trajectory of AI development.
MINT Lab affiliate David Thorstad examines the limits of longtermism in a forthcoming paper in the Australasian Journal of Philosophy. The study introduces "swamping axiological strong longtermism" and identifies factors that may restrict its applicability.
The Machine Intelligence and Normative Theory (MINT) Lab at the Australian National University has secured a USD 1 million grant from Templeton World Charity Foundation. This funding will support crucial research on Language Model Agents (LMAs) and their societal impact.
Prof. Lazar will lead efforts to address AI's impact on democracy during a 2024-2025 tenure as a senior AI Advisor at the Knight Institute.
The Knight First Amendment Institute invites submissions for its spring 2025 symposium, “Artificial Intelligence and Democratic Freedoms.”
Events
This workshop aims to bring together the best philosophical work on normative questions raised by computing, and to identify and connect early-career scholars working on these questions. It will feature papers that use the tools of analytical philosophy to frame and address normative questions raised by computing and computational systems.
The fall Workshop on Sociotechnical AI Safety at Stanford (hosted by Stanford's McCoy Family Center for Ethics in Society, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the MINT Lab at the Australian National University) recently brought together AI safety researchers and those focused on fairness, accountability, transparency, and ethics in AI. The event fostered fruitful discussions on inclusion in AI safety and on complicating the field's conceptual landscape, and participants identified promising directions for future research. A summary of the workshop can be found here, and a full report here.
Michael Barnes presented at the Second Annual Penn-Georgetown Digital Ethics Workshop. The presentation (co-authored with Megan Hyska, Northwestern University) was titled “Interrogating Collective Authenticity as a Norm for Online Speech,” and it offers a critique of (relatively) new forms of content moderation on major social media platforms.
On 23 March 2024 Nick Schuster presented his paper “Role-Taking Skill and Online Marginalization” (co-authored by Jenny Davis) at the American Philosophical Association's 2024 Pacific Division Meeting in Portland, Oregon.
How should we respond to those who aim at building a technology that they acknowledge could be catastrophic? How seriously should we take the societal-scale risks of advanced AI? And, when resources and attention are limited, how should we weigh acting to reduce those risks against targeting more robustly predictable risks from AI systems?
MINT is teaming up with the HUMANE.AI EU project, represented by PhD student Jonne Maas, to support a workshop on political philosophy and AI, to take place at Kioloa Coastal Campus in June 2024.
MINT is teaming up with colleagues in the US to edit a special section of the Journal of Responsible Computing on Barocas, Hardt, and Narayanan’s book Fairness and Machine Learning: Limitations and Opportunities.