AI Safety Investigator

Future of Life Institute
Summary
Join the Future of Life Institute (FLI) as an AI Safety Investigator to leverage investigative journalism skills for a high-impact role. You will document safety practices at leading AI corporations, explain incidents and best practices to the public, and help FLI incentivize improved safety measures. This position involves building and maintaining FLI's semiannual AI Safety Index and conducting in-depth investigations into corporate AI safety practices. You will report to FLI's Head of EU Policy and Research and collaborate closely with President Max Tegmark. The role requires strong communication, research, and analytical skills to effectively communicate complex information to diverse audiences. Compensation includes a competitive salary and benefits package.
Requirements
- Willingness to travel occasionally to the SF Bay Area, California, to follow leads and build relationships
- Fluency in English, both written and oral
- Self-directed, with a desire to work independently under minimal supervision
- An instinct for identifying and following leads, and the judgement to determine appropriate next steps
- High attention to detail and commitment to verifying the truth
- Ability to communicate technical concepts in clear and engaging ways
- Capacity to rapidly assess AI safety incidents and develop infographics or media briefings
Responsibilities
- Investigate corporate practices
  - Build a network of key current and former employees at the largest AI corporations to understand current policies and approaches
  - Conduct desk research and survey major AI corporations
  - As AI incidents occur, quickly prepare a summary of available public and private information on what went wrong and how such incidents could be prevented in the future
- Lead the development of the AI Safety Index
  - Take ownership of one of FLI's flagship projects, conducting research against safety indicators to score and rank AI corporations
  - Find ways to improve the Index, making it more robust, relevant, accessible, and accurate
- Communicate insights to the public
  - Compress complex information into concise, structured formats suitable for index metrics
  - Help create attention-grabbing yet informative data visualisations and written media that effectively communicate AI incidents
  - Work with internal and external communications partners to amplify key findings to larger and new audiences
Preferred Qualifications
- A background in journalism, research, or conducting investigations
- Strong understanding of AI capabilities, technical safety research, and current risk management approaches (e.g. safety frameworks)
- Existing network within the AI safety or AI development space
Benefits
- Health insurance
- 24+ days of PTO per year
- Paid parental leave
- 401(k) matching for US-based employees
- A work-from-home allowance for the purchase of office supplies or equipment