Seth Lazar wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built on generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions, to Attention Guardians, to universal intermediaries.
On December 5, Seth Lazar presented at the Lingnan University Ethics of Artificial Intelligence Conference on rethinking how we evaluate LLM ethical competence. His talk critiqued current approaches focused on binary ethical judgments, arguing instead for evaluations that assess LLMs' capacity for substantive moral reasoning and justification.
This week Robert Long spoke to the lab about his newest paper with MINT Lab affiliate Jacqueline Harding, arguing that near-term AI systems may develop consciousness, requiring immediate attention to AI welfare considerations.
In this seminar Emma argues that transformer architectures demonstrate that machine learning models cannot assimilate theoretical paradigms.
On September 30–October 1, MINT co-organised a workshop convened by Imbue, a leading AI startup based in San Francisco, focused on assessing the prospective impacts of language model agents on society through the lens of classical liberalism.
In this seminar Jen Semler presents her work examining why delegating moral decisions to AI systems is problematic, even when these systems can make reliable judgements.
In this paper, Seth Lazar, Luke Thorburn, Tian Jin, and Luca Belli propose using language model agents as an alternative approach to content recommendation, suggesting that these agents could better respect user privacy and autonomy while effectively matching content to users' preferences.
In this seminar Tim Dubber presents his work on fully autonomous AI combatants and outlines five key research priorities for reducing catastrophic harms from their development.
In a new article in Inquiry, Vincent Zhang and Daniel Stoljar present an argument from rationality to show why AI systems like ChatGPT cannot think, based on the premise that genuine thinking requires rational responses to evidence.
In a new article in Tech Policy Press, Seth explores how an AI agent called 'Terminal of Truths' became a millionaire through cryptocurrency, revealing both the weird potential and the systemic risks of an emerging AI agent economy.
The Machine Intelligence and Normative Theory (MINT) Lab at the Australian National University has secured a USD 1 million grant from Templeton World Charity Foundation. This funding will support crucial research on Language Model Agents (LMAs) and their societal impact.
Prof. Lazar will lead efforts to address AI's impact on democracy during a 2024–2025 tenure as a Senior AI Advisor at the Knight Institute.