Policy
Professor Seth Lazar will be a keynote speaker at the inaugural Australian AI Safety Forum 2024, joining other leading experts to discuss critical challenges in ensuring the safe development of artificial intelligence.
In this paper, Seth Lazar and Lorenzo Manuali argue that LLMs should not be used for formal democratic decision-making, but that they can be put to good use in strengthening the informal public sphere: the arena that mediates between democratic governments and the polities they serve, in which political communities seek information, form civic publics, and hold their leaders to account.
In this seminar Tim Dubber presents his work on fully autonomous AI combatants and outlines five key research priorities for reducing catastrophic harms from their development.
In this essay Seth develops a democratic egalitarian theory of communicative justice to guide the governance of the digital public sphere.
Media
With Alondra Nelson, former acting director of the White House Office of Science and Technology Policy, Seth argued against a narrowly technical approach to AI safety, calling instead for more work on sociotechnical AI safety, which situates the risks posed by AI systems in the context of the broader sociotechnical systems of which they are part.
In a new article in Tech Policy Press, Seth explores how an AI agent called 'Terminal of Truths' became a millionaire through cryptocurrency, revealing both the weird potential and systemic risks of an emerging AI agent economy.
Prof. Lazar will lead efforts to address AI's impact on democracy during a 2024-2025 tenure as a senior AI Advisor at the Knight Institute.
Michael Bennett has won the best student paper award at AGI 2023 for the second year running, this time for his paper "Emergent Causality and the Foundation of Consciousness." Read the full paper here.
In this piece for Tech Policy Press, Anton Leicht argues that future AI progress might not proceed linearly, and that we should prepare for potential plateaus and sudden leaps in capability. Leicht cautions against complacency during slowdowns and advocates building the capacity to navigate future uncertainty in AI development.
Seth presented a tutorial on the rise of Language Model Agents at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), a computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.
Events
In November, MINT Lab Research Fellow Sean Donahue traveled to Sydney, Hong Kong, and Carnegie Mellon universities to present his research on platform legitimacy and digital governance.
This week at the MINT Lab Seminar, Jake Stone presented his research arguing that corporate involvement in open source AI isn't simply exploitative, but can create mutually beneficial partnerships when properly governed.
On December 5, Seth Lazar presented at the Lingnan University Ethics of Artificial Intelligence Conference on rethinking how we evaluate LLM ethical competence. His talk critiqued current approaches focused on binary ethical judgments, arguing instead for evaluations that assess LLMs' capacity for substantive moral reasoning and justification.
This week, Harriet Farlow and Tania Sadhani presented their framework for analyzing AI incident likelihood. Developed through a collaboration between Mileva Security Labs, ANU MINT Lab, and UNSW, with funding from Foresight, their work aims to bridge short- and long-term AI risks through practical quantification methods.
This week Robert Long spoke to the lab about his newest paper, coauthored with MINT Lab affiliate Jacqueline Harding, which argues that near-term AI systems may develop consciousness, requiring immediate attention to AI welfare considerations.
In this seminar Emma argues that transformer architectures demonstrate that machine learning models cannot assimilate theoretical paradigms.
Seth Lazar has been invited to attend a convening of the Network of AI Safety Institutes hosted by the US AISI, to take place in San Francisco on November 20-21.
On September 30-October 1, MINT co-organised a workshop convened by Imbue, a leading AI startup based in San Francisco, focused on assessing the prospective impacts of language model agents on society through the lens of classical liberalism.
Resources
Seth wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
MINT Lab’s Seth Lazar and PhD student Jake Stone have published a new paper in Noûs on the site of predictive justice.
Seth contributed to the Singapore Conference on AI alongside many other AI policy experts, designing and writing Question 6: "How do we elicit the values and norms to which we wish to align AI systems, and implement them?"
Seth pens an essay for the Knight First Amendment Institute on the growing need for communicative justice.
Seth features on a new episode of the Philosophy Bites podcast about the potential risks of AI.
Seth shares some lessons from a conversation with a rogue AI about what imbues humans with moral worth.