Principal Machine Learning Engineer

Algolia
Summary
Join Algolia's AI Search & Discovery Group as a Principal Machine Learning Engineer and play a critical role in scaling the ML foundations powering Algolia's search, ranking, recommendations, and generative AI. This hands-on role involves designing and implementing scalable ML systems, collaborating across teams to improve model quality and reliability, and driving the evolution of shared infrastructure. You will guide architectural decisions, act as a connector across teams, and mentor engineers. The ideal candidate possesses extensive experience in applied machine learning and deploying ML systems in production, excels in technical problem-solving and collaboration, and is deeply curious and motivated by impact. Algolia offers a flexible workplace model with options for remote or hybrid work.
Requirements
- Have a strong engineering background with extensive experience in applied machine learning or deploying ML systems in production
- Have led the design and deployment of models or pipelines in one or more of: search, recommendation, personalization, NLP, or GenAI—and can adapt across domains as needed
- Love balancing hands-on technical problem solving with empowering others to do their best work
- Communicate clearly, work collaboratively, and know how to navigate ambiguity in fast-paced environments
- Write clean, production-grade code in Python (and ideally Go), and care about correctness, scalability, and developer experience
- Are deeply curious and motivated by impact, both through the systems you build and the people you support
Responsibilities
- Design and implement scalable, reusable, and high-performance ML systems that span multiple areas of AI Search & Discovery
- Collaborate across teams to accelerate experimentation, streamline deployment, and improve the quality and reliability of models in production
- Drive the evolution of shared infrastructure, tools, and components—such as embedding services, inference pipelines, and evaluation frameworks
- Guide architectural decisions that balance long-term flexibility with near-term impact
- Act as a connector across domains and teams, identifying opportunities for reuse and alignment
- Mentor engineers with empathy, bringing clarity, cutting through complexity, and helping others grow in their own areas of excellence
Preferred Qualifications
- Experience with real-time ranking systems, retrieval-augmented generation, or reinforcement learning for recommendations
- Familiarity with modern MLOps stacks, including feature stores, offline/online evaluation tooling, and distributed inference
- Experience in cross-team initiatives or open-source contributions that required both technical depth and organizational alignment
Benefits
Flexible workplace model with options for remote or hybrid work