Engineering Member, Pre-training/Data
poolside
Remote - United States
Summary
Join poolside, a company building AI to drive economically valuable work and scientific progress, as a Data Scientist focused on improving the quality of datasets for training large language models (LLMs). You will be a hands-on contributor, drawing on your experience and running experiments to enhance pretraining datasets through synthetic data generation, data-mix optimization, and close collaboration with other teams. Staying current with the latest research is crucial: you will conduct original research initiatives and deploy solutions in production. You will work with massive datasets and a performant distributed data pipeline, contributing to the creation of high-quality datasets for poolside's models.
Requirements
- Strong machine learning and engineering background
- Experience with large language models (LLMs)
- Good knowledge of Transformers is a must
- Knowledge of or experience with cutting-edge training techniques
- Knowledge of or experience with distributed training
- Experience training LLMs from scratch
- Knowledge of deep learning fundamentals
- Experience building trillion-scale pretraining datasets, in particular:
  - Ingesting, filtering, and deduplicating large amounts of web and code data
  - Familiarity with the concepts behind SOTA pretraining datasets: multilinguality, curriculum learning, data augmentation, data packing, etc.
  - Running data ablations, tokenization, and data-mixture experiments
  - Developing prompt-engineering pipelines to generate synthetic data at scale
  - Fine-tuning small models for data filtering
- Experience working with large-scale GPU clusters and distributed data pipelines
- An obsession with data quality
- Research experience
- Programming experience
- Strong algorithmic skills
- Linux
- Git, Docker, k8s, cloud managed services
- Data pipelines and queues
- Python with PyTorch or JAX
Responsibilities
- Follow the latest research related to LLMs and data quality in particular
- Be familiar with the most relevant open-source datasets and models
- Work closely with other teams such as Pretraining, Fine-tuning, or Product to ensure short feedback loops on the quality of the delivered models
- Suggest, conduct, and analyze data ablations and training experiments that use quantitative insights to improve the quality of the generated datasets
Preferred Qualifications
- Authorship of scientific papers on topics such as applied deep learning, LLMs, or source code generation
- Able to discuss the latest papers freely and dive into fine details
- Reasonably opinionated
- Prior programming experience outside ML, especially in languages other than Python
- C/C++, CUDA, Triton
Benefits
- Fully remote work & flexible hours
- 37 days/year of vacation & holidays
- Health insurance allowance for you and dependents
- Company-provided equipment
- Wellbeing, always-be-learning and home office allowances
- Frequent team get-togethers
- A diverse, inclusive, people-first culture