Seth teams up with AI scientists to reply to the statement on the existential risk posed by AI to humans, arguing that we can best avoid existential risk from AI by building robust research communities that work to mitigate better-understood risks from concrete AI systems.
Seth Lazar invited to contribute to the ACOLA Rapid Research report on Large Language Models.
Seth features on a new Science Vs podcast episode about the risks of AI.
Seth Lazar was awarded USD 10,000 to support work on the societal impacts and social ontology of LLMs.