The AIH Lab and Hong Kong Ethics Lab co-hosted "The Philosophy of AI: Themes from Seth Lazar" workshop at HKU on January 17.
On December 14, Seth Lazar gave a keynote talk at the NeurIPS workshop on Pluralistic Alignment.
On December 14, Seth Lazar delivered a keynote talk on evaluating the ethical competence of LLMs at the NeurIPS Algorithmic Fairness through the Lens of Metrics and Evaluation workshop.
On December 9, Seth gave a talk entitled 'Evaluating LLM Ethical Competence' at the HKU workshop on Linguistic and Cognitive Capacities of LLMs.
In this paper, Alan Chan, Kevin Wei, Sihao Huang, Nitarshan Rajkumar, Elija Perrier, Seth Lazar, Gillian K. Hadfield, and Markus Anderljung investigate the infrastructure we need to realise the benefits and manage the risks of AI agents.
On December 5, Seth Lazar presented at the Lingnan University Ethics of Artificial Intelligence Conference on rethinking how we evaluate LLM ethical competence. His talk critiqued current approaches focused on binary ethical judgments, arguing instead for evaluations that assess LLMs' capacity for substantive moral reasoning and justification.
Seth wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
This week Robert Long spoke to the lab about his newest paper with MINT Lab affiliate Jacqueline Harding, arguing that near-term AI systems may develop consciousness, requiring immediate attention to AI welfare considerations.
In this seminar Emma argues that transformer architectures demonstrate that machine learning models cannot assimilate theoretical paradigms.
On September 30–October 1, MINT co-organised a workshop convened by Imbue, a leading AI startup based in San Francisco, focused on assessing the prospective impacts of language model agents on society through the lens of classical liberalism.
In this seminar Jen Semler presents her work examining why delegating moral decisions to AI systems is problematic, even when these systems can make reliable judgements.
In this paper, Seth Lazar, Luke Thorburn, Tian Jin, and Luca Belli propose using language model agents as an alternative approach to content recommendation, suggesting that these agents could better respect user privacy and autonomy while effectively matching content to users' preferences.