Seth wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
With former acting White House Office of Science and Technology Policy director Alondra Nelson, Seth argued against a narrowly technical approach to AI safety, calling instead for more work on sociotechnical AI safety, which situates the risks posed by AI systems in the context of the broader sociotechnical systems of which they are part.
Seth was invited on the Generally Intelligent podcast to discuss issues of power, legitimacy, and the political philosophy of AI.
Seth has completed a book chapter forthcoming with MIT Press. The book is Collaborative Intelligence: How Humans and AI Are Transforming Our World, edited by Arathi Sethumadhavan and Mira Lane.
Seth was invited to deliver the Scholl Lecture at Purdue University on 3 April. His presentation focused on how we should respond now to the kinds of catastrophic risks posed by AI systems that often dominate contemporary discourse in the normative philosophy of computing.
How should we respond to those who aim to build a technology that they acknowledge could be catastrophic? How seriously should we take the societal-scale risks of advanced AI? And, when resources and attention are limited, how should we weigh acting to reduce those risks against targeting more robustly predictable risks from AI systems?
MINT is teaming up with the HUMANE.AI EU project, represented by PhD student Jonne Maas, to support a workshop on political philosophy and AI, to take place at Kioloa Coastal Campus in June 2024.
MINT is teaming up with colleagues in the US to edit a special section of the Journal of Responsible Computing on Barocas, Hardt, and Narayanan's book Fairness and Machine Learning: Limitations and Opportunities.
Together with Aaron Snoswell, Dylan Hadfield-Menell, and Daniel Kilov, Seth Lazar has been awarded USD 50,000 to support work on developing a "moral conscience" for AI agents. The grant will start in April 2024 and run for 9-10 months.
Our special issue of Philosophical Studies on Normative Theory and AI is now live. A few more papers are still to come, but in the meantime you can find eight new papers on AI and normative theory there.
Seth Lazar and lead author Nick Schuster published a paper on algorithmic recommendation in Philosophical Studies.
Seth Lazar and former White House policy advisor Alex Pascal assess democracy's prospects in a world with AGI, in Tech Policy Press.