
PEOPLE

MINT is led by Seth Lazar, Professor of Philosophy at the ANU and Distinguished Research Fellow at the University of Oxford. The team includes post-docs, PhD students, honours students, and affiliates, working across the moral and political philosophy of data and AI, and sociotechnical AI safety.

RESEARCH

MINT aims to make first-order progress in the normative philosophy of computing and sociotechnical AI safety. Our projects range from moral psychology and moral epistemology as applied to LLMs, through the justification of political authority, to developing LLM evaluations and building AI agents.

ENGAGEMENT

MINT works closely with partners in industry and government around the world to place empirically and technically grounded philosophy at the heart of AI Ethics and Safety.

JOIN MINT

MINT is a growing team, and we’re always interested in hearing from prospective PhD and honours students who want to work on MINTy projects, as well as from potential new affiliates. Express your interest in participating here.


Our Approach


Normative Philosophy of Computing

AI and related computational technologies are already embedded in almost every corner of our lives, and their harms are becoming viscerally apparent. These technologies demand answers, first, to the question of what, if anything, we should use these novel tools to do. But we must also ask who should decide the answer to that first question, and how the power that those decision-makers exercise by way of AI should be constrained.

If our answer to these questions is not simply the abolitionist call to stop building AI altogether, then we need to know just how its use can be justified. To do this, we need to develop a robust new subfield: the normative philosophy of computing. What’s more, the advent of AI in society raises first-order questions in normative philosophy that cannot be answered simply by applying an off-the-shelf solution. The search for low-hanging fruit invariably leads to critical errors and mistranslations. And much existing philosophical work on AI connects only coincidentally to actual AI practice, and rarely makes fundamental headway in philosophy itself.

There is an urgent need for empirically and technically grounded, philosophically ground-breaking work on the normative philosophy of computing. The MINT lab exists to fill that need—through its own work, and through fostering an international community of like-minded researchers. If you’re interested in learning more about this growing field, reach out here, and join our mailing list here.

Sociotechnical AI Safety

AI Safety now plays a prominent role in both industry and government attempts to understand and mitigate the risks of advanced AI systems. Researchers have highlighted its constitutive shortcomings: that it is methodologically, demographically, ideologically, and evaluatively too narrow for the task it has been set. But while many AI labs and national AI Safety Institutes acknowledge these critiques, they need better theoretical and empirical resources to address them.

Alongside our work in normative philosophy of computing, the MINT Lab is working on bringing a sociotechnical lens to AI safety—leading by example through theoretical and empirical work, but also developing a broader conceptual and practical toolkit for the field.

This is not about identifying a subset of harms or concerns as sociotechnical. Instead, it is a lens one applies to the problems of AI safety in general: you can’t adequately anticipate threats, weigh their likelihood, or intervene to mitigate them (whether ex ante or post hoc) unless you start from the premise that AI systems are inherently sociotechnical, i.e. that they are constituted not just by software and hardware, but also by people, groups, institutions, and structures.

In other words, even narrowly technical interventions on AI systems should be sociotechnical in this sense: the intervention should ideally be guided by a broader understanding of the system in which it intervenes.


Research Themes

AI and Power

Ethics for AI Agents

Moral Skill

Sociotechnical AI Safety


New Writing


News

