AI Research Engineer

Tether.to

πŸ“Remote - Worldwide

Summary

Join Tether's AI model team and drive innovation in supervised fine-tuning methodologies for advanced models. You will refine pre-trained models to deliver enhanced intelligence and optimized performance for real-world challenges, working on a wide spectrum of systems, from streamlined models for limited hardware to complex multi-modal architectures. The role draws on deep expertise in large language model architectures and fine-tuning optimization, and takes a hands-on, research-driven approach to developing, testing, and implementing new techniques and algorithms. Responsibilities include curating specialized data, strengthening baseline performance, and resolving fine-tuning bottlenecks, with the goal of unlocking superior domain-adapted AI performance.

Requirements

  • A degree in Computer Science or a related field
  • Ideally a PhD in NLP, Machine Learning, or a related field, complemented by a solid track record in AI R&D (with strong publications at A* conferences)
  • Hands-on experience with large-scale fine-tuning experiments, where your contributions have led to measurable improvements in domain-specific model performance
  • Deep understanding of advanced fine-tuning methodologies, including state-of-the-art modifications for transformer architectures as well as alternative approaches. Your expertise should emphasize techniques that enhance model intelligence, efficiency, and scalability within fine-tuning workflows
  • Strong expertise in PyTorch and the Hugging Face libraries, with practical experience developing fine-tuning pipelines, continuously adapting models to new data, and deploying the refined models in production on target platforms (see the illustrative sketch after this list)
  • Demonstrated ability to apply empirical research to overcome fine-tuning bottlenecks. You should be comfortable designing evaluation frameworks and iterating on algorithmic improvements to continuously push the boundaries of fine-tuned AI performance
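For illustration only, here is a minimal sketch of the kind of Hugging Face fine-tuning pipeline the role involves. The base model name, the "domain_corpus.txt" data file, and the hyperparameters are placeholder assumptions, not Tether's actual setup.

```python
# Illustrative SFT pipeline sketch. Assumptions: "gpt2" as a stand-in base
# model, a local "domain_corpus.txt" as curated domain data, toy hyperparameters.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "gpt2"  # placeholder; any causal LM checkpoint works the same way
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical curated domain corpus, one example per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="sft-out",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=2e-5,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    # mlm=False yields standard causal-LM labels (inputs shifted by one).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("sft-out")  # checkpoint reused in the evaluation sketch below
```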

Responsibilities

  • Develop and implement novel, state-of-the-art fine-tuning methodologies for pre-trained models, with clear performance targets
  • Build, run, and monitor controlled fine-tuning experiments while tracking key performance indicators. Document iterative results and compare against benchmark datasets (see the evaluation sketch after this list)
  • Identify and process high-quality datasets tailored to specific domains. Set measurable criteria to ensure that data curation positively impacts model performance in fine-tuning tasks
  • Systematically debug and optimize the fine-tuning process by analyzing computational and model performance metrics
  • Collaborate with cross-functional teams to deploy fine-tuned models into production pipelines. Define clear success metrics and ensure continuous monitoring for improvements and domain adaptation
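As a hedged illustration of the benchmark comparisons described above, the sketch below scores a baseline checkpoint against a fine-tuned one by perplexity on a public dataset. The model paths and the WikiText slice are assumptions for demonstration, not the team's actual evaluation framework.

```python
# Sketch: compare a baseline vs. a fine-tuned checkpoint by perplexity on a
# held-out benchmark. Paths and the WikiText split are illustrative assumptions.
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_perplexity(model_path, tokenizer, texts, max_length=512):
    """Rough perplexity: exp of the mean per-example loss (not token-weighted)."""
    model = AutoModelForCausalLM.from_pretrained(model_path).eval()
    losses = []
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length)
            out = model(**enc, labels=enc["input_ids"])
            losses.append(out.loss.item())
    return math.exp(sum(losses) / len(losses))

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # same tokenizer for both models

# Placeholder benchmark: a small slice of WikiText-2, skipping empty/short lines.
benchmark = load_dataset("wikitext", "wikitext-2-raw-v1", split="test[:200]")
texts = [t for t in benchmark["text"] if len(t.split()) > 1]

baseline_ppl = mean_perplexity("gpt2", tokenizer, texts)      # stand-in baseline
finetuned_ppl = mean_perplexity("sft-out", tokenizer, texts)  # hypothetical fine-tuned checkpoint
print(f"baseline ppl: {baseline_ppl:.2f} | fine-tuned ppl: {finetuned_ppl:.2f}")
```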
