Lauren Wagner
@typewriters
Blown away by this paper - it speaks to everything I’ve been working towards the past few years building the verification and trust infrastructure to accelerate trustworthy AI, including @arcprize.

Safe AI is verifiable AI, with insurable deployments so that risk is absorbed by the right actors, not society as a whole. Verification hits at so many parts of the stack - in training, in evals, and in guiding frontier research. In the words of the author, “Verification is not a compliance function. It is a primary production technology — and increasingly the most defensible moat.”

As an investor, I’ve targeted startups building this trust infra, but my hope is that founders realize this is a function that needs to be built inside their companies too - a holistic function leads to more durable enterprise deployments and policy engagement that produces positive outcomes for companies and society.

Safe AI is achievable, but many of the ideas for getting there are too simplistic. I saw a post on LessWrong from 2022 discussing the best way to contain AI AWDs - the authors advocated for global agreements halting their use. Proposals like that assume some level of centralized power and flatten complex ecosystems into something subject to an on/off switch. I’ve seen firsthand from years in global tech that it doesn’t work.

This paper looks at a complicated ecosystem and a complicated technology and creates an actual playbook for change.