Our Approach
Normative Philosophy of Computing
AI and related computational technologies are already embedded in almost every corner of our lives, and their harms are becoming viscerally apparent. These technologies demand answers, first, to the question of what, if anything, we should use these novel tools to do. But we must also ask who should decide the answer to that first question, and how the power that those decision-makers exercise by way of AI should be constrained.
If our answer to these questions is not simply the abolitionist call to stop building AI altogether, then we need to know just how its use can be justified. To do this, we need to develop a robust new subfield: the normative philosophy of computing. What’s more, the advent of AI in society raises first-order questions in normative philosophy that cannot be answered simply by applying off-the-shelf solutions. The search for low-hanging fruit invariably leads to critical errors and mistranslations. And much existing philosophical work on AI connects only coincidentally to actual AI practice, and rarely makes fundamental headway in philosophy itself.
There is an urgent need for empirically and technically grounded, philosophically groundbreaking work in the normative philosophy of computing. The MINT Lab exists to fill that need, through its own work and through fostering an international community of like-minded researchers. If you’re interested in learning more about this growing field, reach out here, and join our mailing list here.
Sociotechnical AI Safety
AI Safety now plays a prominent role in both industry and government attempts to understand and mitigate the risks of advanced AI systems. Researchers have highlighted its constitutive shortcomings: that it is methodologically, demographically, ideologically and evaluatively too narrow for the task it has been set. But while many AI labs and national AI Safety Institutes acknowledge these critiques, they need better theoretical and empirical resources to address them.
Alongside our work in normative philosophy of computing, the MINT Lab is working on bringing a sociotechnical lens to AI safety—leading by example through theoretical and empirical work, but also developing a broader conceptual and practical toolkit for the field.
This is not about identifying a subset of harms or concerns as sociotechnical. Instead, it is a lens one applies to the problems of AI safety in general: you cannot adequately anticipate threats, weigh their likelihood, or intervene to mitigate them (whether ex ante or post hoc) unless you start from the premise that AI systems are inherently sociotechnical, i.e. that they are constituted not just by software and hardware but also by people, groups, institutions, and structures.
In other words, even narrowly technical interventions on AI systems should also be sociotechnical, in this sense: the intervention should ideally be guided by a broader understanding of the system in which it intervenes.
News
Featured
In a forthcoming essay in Philosophy & Public Affairs, based on his 2023 Stanford Tanner Lectures, Seth develops a model of algorithmically mediated social relations through the concept of the "Algorithmic City," examining how this new form of intermediary power challenges traditional theories in political philosophy.
The AIH Lab and Hong Kong Ethics Lab co-hosted "The Philosophy of AI: Themes from Seth Lazar" workshop at HKU on January 17.
On December 14, Seth Lazar gave a keynote talk at the NeurIPS workshop on Pluralistic Alignment.
Seth has been invited to give the first Annual Arthur & Barbara Gianelli Lecture on The Philosophy of Science at St John’s University in April 2025.
On December 14, Seth Lazar delivered a keynote talk on evaluating the ethical competence of LLMs at the NeurIPS workshop on Algorithmic Fairness through the Lens of Metrics and Evaluation.
Professor Seth Lazar will be a keynote speaker at the inaugural Australian AI Safety Forum 2024, joining other leading experts to discuss critical challenges in ensuring the safe development of artificial intelligence.
In this paper, Seth Lazar and Lorenzo Manuali argue that LLMs should not be used for formal democratic decision-making, but that they can be put to good use in strengthening the informal public sphere: the arena that mediates between democratic governments and the polities that they serve, in which political communities seek information, form civic publics, and hold their leaders to account.
On December 9, Seth gave a talk entitled 'Evaluating LLM Ethical Competence' at the HKU workshop on Linguistic and Cognitive Capacities of LLMs.
From December 1, 2024, to February 7, 2025, Seth will be undertaking a visiting fellowship with the University of Hong Kong.
In this essay Seth develops a democratic egalitarian theory of communicative justice to guide the governance of the digital public sphere.
The UK government is considering the use of Large Language Models to summarise and analyse submissions during public consultations. Seth weighs in for the Guardian on the considerations behind this proposal.
In this paper, Alan Chan, Kevin Wei, Sihao Huang, Nitarshan Rajkumar, Elija Perrier, Seth Lazar, Gillian K. Hadfield, and Markus Anderljung investigate the infrastructure we need to bring about the benefits and manage the risks of AI Agents.
In a forthcoming paper in the ACM Journal on Responsible Computing, Jake Stone and Brent Mittelstadt consider how we ought to legitimate automated decision-making.
In a forthcoming paper in AI & Society, Sean Donahue argues that while common objections to epistocracy may not apply to AI governance, epistocracy remains fundamentally flawed.
On December 5, Seth Lazar presented at the Lingnan University Ethics of Artificial Intelligence Conference on rethinking how we evaluate LLM ethical competence. His talk critiqued current approaches focused on binary ethical judgments, arguing instead for evaluations that assess LLMs' capacity for substantive moral reasoning and justification.
In November, MINT Lab Research Fellow Sean Donahue traveled to universities in Sydney and Hong Kong, and to Carnegie Mellon University, to present his research on platform legitimacy and digital governance.
Events
This week at the MINT Lab Seminar, Jake Stone presented his research arguing that corporate involvement in open source AI isn't simply exploitative, but can create mutually beneficial partnerships when properly governed.