AI Research Engineer

Tether.to
Summary
Join Tether's AI model team and drive innovation in supervised fine-tuning methodologies for advanced models. Refine pre-trained models to deliver enhanced intelligence and optimized performance for real-world challenges. Work on various systems, from resource-efficient models to complex multi-modal architectures. Contribute to developing, testing, and implementing new fine-tuning techniques and algorithms. Responsibilities include curating specialized data, strengthening baseline performance, and resolving bottlenecks in the fine-tuning process. The goal is to unlock superior domain-adapted AI performance and push the limits of what these models can achieve. Tether offers a global, remote work environment and the opportunity to collaborate with leading minds in the fintech industry.
Requirements
- A degree in Computer Science or a related field
- Ideally a PhD in NLP, Machine Learning, or a related field, complemented by a solid track record in AI R&D (including publications at A* conferences)
- Hands-on experience with large-scale fine-tuning experiments, where your contributions have led to measurable improvements in domain-specific model performance
- Deep understanding of advanced fine-tuning methodologies, including state-of-the-art modifications for transformer architectures as well as alternative approaches. Your expertise should emphasize techniques that enhance model intelligence, efficiency, and scalability within fine-tuning workflows
- Strong expertise in PyTorch and Hugging Face libraries with practical experience in developing fine-tuning pipelines, continuously adapting models to new data, and deploying these refined models in production on target platforms
- Demonstrated ability to apply empirical research to overcome fine-tuning bottlenecks. You should be comfortable designing evaluation frameworks and iterating on algorithmic improvements to continuously push the boundaries of fine-tuned AI performance
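To make the fine-tuning pipeline requirement concrete, here is a minimal sketch of the kind of workflow the role describes: taking a pre-trained backbone, freezing it, and fine-tuning a task-specific head on curated domain data while tracking an evaluation metric. The model and dataset below are synthetic placeholders (in practice the backbone would be a transformer loaded via PyTorch/Hugging Face), so treat this as an illustration of the loop, not a production recipe.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Placeholder "pre-trained" backbone: in a real pipeline this would be a
# transformer checkpoint (e.g. loaded with Hugging Face transformers).
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
head = nn.Linear(32, 2)  # new task-specific head to be fine-tuned

# Freeze the backbone and train only the head, a common lightweight
# fine-tuning setup when compute or data is limited.
for p in backbone.parameters():
    p.requires_grad = False

# Synthetic data standing in for a curated domain-specific dataset.
x = torch.randn(128, 16)
y = (x.sum(dim=1) > 0).long()
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

opt = torch.optim.AdamW(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def evaluate() -> float:
    """Accuracy on the (synthetic) evaluation set, the tracked KPI here."""
    with torch.no_grad():
        preds = head(backbone(x)).argmax(dim=1)
        return (preds == y).float().mean().item()

acc_before = evaluate()
for epoch in range(20):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(head(backbone(xb)), yb)
        loss.backward()
        opt.step()
acc_after = evaluate()
```

The same structure (frozen base, trainable adapter or head, metric tracked before and after) carries over to parameter-efficient methods such as LoRA, which the "advanced fine-tuning methodologies" requirement alludes to.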
Responsibilities
- Develop and implement novel, state-of-the-art fine-tuning methodologies for pre-trained models with clear performance targets
- Build, run, and monitor controlled fine-tuning experiments while tracking key performance indicators. Document iterative results and compare against benchmark datasets
- Identify and process high-quality datasets tailored to specific domains. Set measurable criteria to ensure that data curation positively impacts model performance in fine-tuning tasks
- Systematically debug and optimize the fine-tuning process by analyzing computational and model performance metrics
- Collaborate with cross-functional teams to deploy fine-tuned models into production pipelines. Define clear success metrics and ensure continuous monitoring for improvements and domain adaptation