Senior Data Engineer

NBCUniversal
Summary
Join NBCUniversal's global Operations & Technology organization as a Senior Data Engineer and help shape the future of our data and analytics strategy. You will collaborate with business leaders, engineers, and product managers to understand data needs; design, build, and scale data pipelines across a variety of source systems and streams; and implement process improvements such as optimizing data delivery and redesigning infrastructure for greater scalability. You will work closely with internal stakeholders, data engineers, visualization experts, and data scientists. This fully remote position offers competitive compensation and benefits, including medical, dental, and vision insurance, a 401(k), paid leave, and tuition reimbursement.
Requirements
- 5+ years of experience in a data engineering or related role
- Direct experience designing and building data models, applying ETL/ELT development principles, and working with data warehousing concepts
- Strong knowledge of data management fundamentals and data storage principles
- Deep experience in building data pipelines using Python/SQL
- Deep experience in Airflow or similar orchestration engines
- Deep experience in applying CI/CD principles and processes to data engineering solutions
- Strong understanding of cloud data engineering design patterns and use cases
- Experience with Google Cloud Platform (GCP) services such as BigQuery, Composer, Vertex AI, GCS, and other GCP resources
- Bachelor's degree in Computer Science, Data Science, Statistics, Informatics, Information Systems, Mathematics, Computer Engineering, or another quantitative field
- Fully Remote: This position has been designated as fully remote, meaning that the position is expected to contribute from a non-NBCUniversal worksite, most commonly an employee's residence
Responsibilities
- Collaborate with business leaders, engineers, and product managers to understand data needs
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using cloud-native data engineering principles
- Design, build, and scale data pipelines across a variety of source systems and streams (internal, third-party, as well as cloud-based), distributed/elastic environments, and downstream applications and/or self-service solutions
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Implement the appropriate design patterns while optimizing for performance, cost, security, scalability, and end-user experience
- Participate in and lead development sprints, demos, and retrospectives, as well as releases and deployments
- Build and manage relationships with supporting IT teams to deliver work products to production effectively
Preferred Qualifications
- Knowledge of Medallion Architecture
- Experience using dbt
- Experience using Apache Iceberg to manage datasets
Benefits
This position is eligible for company-sponsored benefits, including medical, dental, and vision insurance, 401(k), paid leave, tuition reimbursement, and a variety of other discounts and perks