Lead Data Engineer

Brillio
Remote - India
Summary
Join our team as a Lead Data Engineer to lead cloud modernization initiatives, develop scalable data pipelines, and enable real-time data processing. You will drive the transformation of legacy infrastructure into a robust, cloud-native data ecosystem. This high-impact role requires expertise in Google Cloud Platform (GCP) and BigQuery. Key responsibilities include data migration, integration, ETL/data pipeline development, and query optimization. We offer competitive compensation and benefits, remote flexibility, and growth opportunities. The ideal candidate has 5+ years of data engineering experience with a strong focus on cloud and big data technologies, along with proven experience migrating on-premises data systems to the cloud.
Requirements
- 5+ years in Data Engineering with a strong focus on cloud and big data technologies
- 2+ years of hands-on experience with GCP, specifically BigQuery
- Proven experience migrating on-premises data systems to the cloud
- Strong development experience with Apache Airflow, Python, and Apache Spark
- Expertise in streaming data ingestion, particularly in IoT or sensor data environments
- Strong SQL development skills; experience with BigQuery performance tuning
- Solid understanding of cloud architecture, data modeling, and data warehouse design
- Familiarity with Git and CI/CD practices for managing data pipelines
Responsibilities
- Analyze legacy on-premises and hybrid cloud data warehouse environments (e.g., SQL Server)
- Lead the migration of large-scale datasets to Google BigQuery
- Design and implement data migration strategies ensuring data quality, integrity, and performance
- Integrate data from various structured and unstructured sources, including APIs, relational databases, and IoT devices
- Build real-time streaming pipelines for large-scale ingestion and processing of IoT and telemetry data
- Modernize and refactor legacy SSIS packages into cloud-native ETL pipelines
- Develop scalable, reliable workflows using Apache Airflow, Python, Spark, and GCP-native tools
- Ensure high-performance data transformation and loading into BigQuery for analytical use cases
- Write and optimize complex SQL queries, stored procedures, and scheduled jobs within BigQuery
- Develop modular, reusable transformation scripts using Python, Java, Spark, and SQL
- Continuously monitor and optimize query performance and cost efficiency in the cloud data environment
Preferred Qualifications
- GCP Professional Data Engineer certification
- Experience with modern data stack tools like dbt, Kafka, or Terraform
- Exposure to ML pipelines, analytics engineering, or DataOps/DevOps methodologies
Benefits
- Competitive compensation and benefits
- Remote flexibility and growth opportunities