Future of Life Institute is hiring an
AI Risk Management Researcher in the United States

AI Risk Management Researcher
🏢 Future of Life Institute
💵 $150k-$180k
📍 United States
📅 Posted on Jun 1, 2024

Summary

This is a research position at CARMA focused on analyzing the offense/defense balances of advanced AI systems. The role involves developing and critiquing analytical methodologies, writing papers, and drafting guidance for AI developers and auditors. The organization aims to lower risks to humanity from transformative AI by providing critical support to society.

Requirements

  • An M.Sc. or higher, or a B.Sc. plus 4 years of experience, in Computer Science, Security Studies, Risk Management, AI Policy, Cybersecurity, or a closely related field
  • A demonstrated focus on one or more of: machine learning, AI, safety engineering, AI policy, complex systems, operations research, operational risk management, or other relevant domains
  • Experience in any of the following: security mindset, security studies research, cybersecurity, safety engineering, AI governance, operational risk management, catastrophe risk management, operations research, industrial engineering, futures studies, foresight methods, leading labs, ontologies and knowledge bases, incentive studies, criminal psychology, or technical standards development
  • Skilled in developing informal, semi-formal, and formal models
  • Relevant publications
  • Skilled in technical and expository writing

Responsibilities

  • Apply a security mindset to analyzing prospective AI proliferation and usage dynamics
  • Combine methods from safety engineering, operational risk management, cybersecurity, and other disciplines to develop analyses and methodologies to model and weigh the offense/defense balances of differentially introduced capabilities
  • Research and develop impact assessments related to global security, national security, public health, wellbeing, and other topics related to AI risk management
  • Develop methods of aggregating modeled effects for AI models, AI systems, datasets, services, and other aggregate artifacts
  • Write papers explaining the groundings for the risk estimation methods you develop
  • Draft guidance for AI developers and auditors, suitable as contributions to AI standards working groups

Preferred Qualifications

  • Fluidity in using the modern cognitive AI stack of LLMs, prompt engineering, and scaffolds
  • Experience with scalable graph models
  • Experience in probabilistic programming environments

Benefits

$150,000 - $180,000 a year salary, plus good benefits

Help us out by mentioning to Future of Life Institute that you discovered this job opportunity on JobsCollider. Your support is greatly appreciated. Thank you 🙏
