Senior Staff AI Engineer

ASAPP
Summary
Join ASAPP, a company driven by customer obsession and purposeful speed, as a Senior Staff AI Engineer. You will play a pivotal role in designing, building, and deploying cutting-edge AI systems for mission-critical enterprise applications. This role requires deep expertise in foundation models and enterprise AI systems, a passion for solving complex problems with modern machine learning techniques, and the ability to lead AI initiatives end to end. You will collaborate with engineering, research, and product teams to translate research into enterprise-grade ML systems. The ideal candidate thrives in ambiguity, is deeply technical, and has strong product sense. This is a journey, not just a job, with opportunities for continuous learning and growth at a high-growth startup.
Requirements
- 8+ years of professional experience in Machine Learning, with 3+ years focused on LLMs or NLP at scale
- Proficiency in Python and ML frameworks such as PyTorch or TensorFlow
- Proven experience working with foundation models (OpenAI, AWS Bedrock, Anthropic, Hugging Face)
- Hands-on experience with fine-tuning LLMs
- Strong understanding of RAG pipelines, prompt engineering, and vector search
- Experience building AI systems using vector databases
- Deep experience with Python and cloud services; AWS is required, and experience with GCP, Azure, or Snowflake is a plus
- Experience deploying and scaling AI systems using Docker, Kubernetes, and CI/CD practices
- Demonstrated ability to move fast: prototype, test, and iterate quickly
- Strong written and verbal communication skills; ability to communicate complex technical ideas to both technical and non-technical audiences
- Comfortable with ambiguity and proactive in identifying and solving high-impact problems
- Highly organized, with the ability to manage multiple priorities and projects in parallel
Responsibilities
- Drive the design and implementation of scalable ML/AI systems with a focus on large language models, vector databases, and retrieval-based architectures
- Leverage foundation models from major providers (OpenAI, AWS Bedrock, Anthropic, etc.) for both prototyping and production use
- Fine-tune, adapt, and evaluate LLMs for domain-specific enterprise applications
- Develop robust infrastructure for experimentation, training, deployment, and monitoring of AI models in production environments
- Optimize model performance and inference workflows for latency, cost, and accuracy
- Provide hands-on leadership and mentorship to other engineers, helping them grow technically and operationally
- Translate research ideas and early-stage prototypes into enterprise-grade ML systems
- Stay abreast of advancements in AI, open-source models, and MLOps best practices
- Establish and improve internal processes for experimentation, evaluation, and deployment of ML models
- Collaborate with cross-functional stakeholders to define AI roadmap and influence product direction
Preferred Qualifications
- Prior experience in MLOps, experimentation platforms, or model evaluation pipelines
- Contributions to open-source AI/ML tools
- Knowledge of additional languages such as SQL, Go, JavaScript, or TypeScript
- Graduate degree (MS or PhD) in Computer Science, Machine Learning, or related field
Compensation
$200,000 - $220,000 a year