Future of Life Institute is hiring an AI Risk Management Researcher in the United States

AI Risk Management Researcher
🏢 Future of Life Institute
💵 $150k-$180k
📍United States
📅 Posted on Jun 13, 2024

Summary

The role involves leading research on key gaps in the global risk management of advanced AI systems for CARMA, a nonprofit organization focused on managing risks from transformative AI. It requires an M.Sc. or higher, or a B.Sc. plus 6 years of experience, in a relevant field, along with skills in model development, technical writing, and managing multiple concurrent tasks.

Requirements

  • An M.Sc. or higher, or a B.Sc. plus 6 years of experience, in Computer Science, Security Studies, Risk Management, AI Policy, Cybersecurity, or a closely related field
  • A demonstrated focus on one or more of: machine learning, AI, safety engineering, AI policy, complex systems, operations research, operational risk management, or another relevant domain
  • Experience in any two or more of the following: security mindset, security studies research, cybersecurity, safety engineering, AI governance, operational risk management, catastrophe risk management, operations research, industrial engineering, futures studies, foresight methods, leading labs, ontologies and knowledge bases, incentive studies, criminal psychology, or technical standards development
  • Relevant publications
  • Skilled in developing informal, semi-formal, and formal models
  • Skilled in technical and expository writing
  • Experience working on, tracking, and successfully completing multiple concurrent tasks to meet deadlines with minimal supervision

Responsibilities

  • Apply a security mindset to analyzing prospective AI proliferation and usage dynamics
  • Research aspects of AI R&D processes, AI safety, and prospective AGI capabilities
  • Research and develop impact assessments related to global security, national security, public health, wellbeing, and other topics related to AI risk management
  • Write papers explaining groundings and processes for developing and implementing particular types of risk management analyses
  • Draft methodologies for AI developers and auditors, appropriate as contributions to AI standards working groups
  • Focus on either a) comprehensive risk quantification of systems/models/datasets or b) questions of offense-defense balances of capabilities from systems/models/datasets

Preferred Qualifications

  • Fluency with the modern cognitive AI stack of LLMs, prompt engineering, and scaffolds
  • Experience with scalable graph models
  • Experience in probabilistic programming environments

Benefits

  • $150,000 - $180,000 per year, plus benefits