Lead Data Architect

Nerdery
Summary
Join Nerdery as a Lead Data Architect (Principal) and become a key player in designing and delivering scalable, cloud-native data solutions on Google Cloud Platform (GCP). You will lead technical teams, mentor engineers, and drive data strategy and governance. This customer-facing role requires extensive experience in data architecture, cloud data engineering, and analytics solutions on GCP. You will be responsible for architecting data pipelines, implementing data governance frameworks, and engaging with clients to translate business requirements into technical solutions. As a member of the Technical Tiger Team, you will contribute to innovation and prototyping cutting-edge GCP solutions. This position offers opportunities for technical leadership, mentorship, and collaboration within a dynamic and supportive environment.
Requirements
- 12+ years of experience in data architecture, cloud data engineering, and analytics solutions
- 7+ years of experience designing and implementing large-scale data solutions on Google Cloud Platform (GCP)
- Proven ability to lead technical teams, mentor engineers, and drive data strategy & governance
- Strong experience defining conceptual, logical, and physical data models, along with experience across different data warehousing and data lake architectures
- Extensive experience in ETL/ELT, real-time data streaming, and big data processing frameworks
- Strong experience in pre-sales support, client engagement, and solution architecture for data platforms
- Proven ability to engage with clients, understand their business requirements, and translate them into technical solutions
- Experience managing data-related projects from inception to completion
- Deep knowledge of data security, compliance (GDPR, HIPAA, etc.), and performance optimization
- Excellent communication skills, with the ability to articulate complex data concepts to both technical and non-technical stakeholders
- Core GCP data services (data warehousing & databases): BigQuery, Cloud SQL, Spanner, AlloyDB, Memorystore
- Data Integration & ETL: Cloud Dataflow (Apache Beam), Cloud Composer (Apache Airflow), Datastream (CDC)
- Analytics: BigQuery, Cloud SQL, Looker & Looker Studio, Connected Sheets, Pub/Sub
- Data Engineering Fundamentals: ETL/ELT pipelines, real-time data streaming, big data processing frameworks
- Data modeling (conceptual, logical, physical), data governance, security, and compliance
- Data warehousing knowledge and strong SQL skills
- Hands-on experience migrating data from on-premises systems to the cloud, including migrations from on-premises databases to BigQuery and BigQuery (BQ) conversion work
- Experience building and scheduling data pipelines with Cloud Composer (Airflow) and performing data and file transformations in Python (a minimal sketch follows this list)
- Enterprise Data Warehouse (EDW) and data model design, along with experience in data modeling, data warehousing, and ETL processes
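To give a concrete sense of the Cloud Composer (Airflow) scheduling and BigQuery loading work described above, here is a minimal, hypothetical sketch of such a pipeline. The DAG name, bucket, project, dataset, and table identifiers are illustrative assumptions, not references to any Nerdery or client systems.

```python
# Hypothetical sketch only: all bucket, project, dataset, and table names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_sales_load",          # hypothetical pipeline name
    schedule_interval="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    # Land raw CSV files from Cloud Storage into a BigQuery staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_to_staging",
        bucket="example-landing-bucket",                                 # assumption
        source_objects=["sales/{{ ds }}/*.csv"],
        destination_project_dataset_table="example-project.staging.sales_raw",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
    )

    # ELT step: transform staged rows into a curated reporting table with SQL.
    transform = BigQueryInsertJobOperator(
        task_id="transform_to_curated",
        configuration={
            "query": {
                "query": (
                    "SELECT order_id, SUM(amount) AS total_amount "
                    "FROM `example-project.staging.sales_raw` "
                    "GROUP BY order_id"
                ),
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": "example-project",
                    "datasetId": "curated",
                    "tableId": "sales_daily",
                },
                "writeDisposition": "WRITE_TRUNCATE",
            }
        },
    )

    load_raw >> transform
```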
Responsibilities
- Design and oversee scalable, high-performance cloud data solutions on GCP, including BigQuery, Vertex AI, Dataflow, Cloud SQL, Spanner, Bigtable, Firestore, AlloyDB, and more
- Develop multi-phased cloud data strategies and implementation roadmaps tailored to client needs
- Architect end-to-end data pipelines for structured, semi-structured, and unstructured data
- Lead data modeling efforts, including conceptual, logical, and physical data models
- Define and implement robust data governance, security, and compliance frameworks
- Conduct in-depth research and analysis to recommend optimal technical approaches
- Engage with executive stakeholders and technical teams to translate business requirements into scalable GCP solutions
- Gather technical requirements, assess client capabilities, and design cloud adoption strategies
- Develop and deliver compelling POCs and prototypes showcasing the value of BigQuery, Cloud Composer, Dataflow, and other GCP Data technologies
- Lead collaborative workshops and project meetings to ensure successful solution deployment
- Design and implement cost-effective, secure, end-to-end cloud analytics solutions
- Act as the technical authority for all data-related decisions across engineering, presales, and delivery teams
- Provide hands-on mentorship to data engineers and solutions architects
- Establish and enforce data standards, best practices, and frameworks
- Foster a culture of innovation and continuous learning
- Provide strategic guidance on cloud data adoption, modernization, and migration
- Work with GCP data engineering services, including BigQuery, Airflow/Cloud Composer, and Dataflow
- Develop Python-based ETL pipelines hands-on, review pull requests (PRs), and set and maintain high code standards within the team (see the pipeline sketch after this list)
- Be a core member of Nerdery's Technical Tiger Team, rapidly prototyping and validating data architectures
- Optimize data engineering pipelines, ensuring performance at scale
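To illustrate the hands-on Python ETL work referenced in the responsibilities above, the following is a minimal sketch of a Dataflow (Apache Beam) job that reads files from Cloud Storage and writes to BigQuery. The project, region, bucket, schema, and table names are placeholder assumptions for illustration only.

```python
# Hypothetical sketch only: file paths, schema, and table names are illustrative.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_csv_line(line: str) -> dict:
    """Turn one CSV row into a BigQuery-ready dict (assumed column order)."""
    order_id, amount = line.split(",")
    return {"order_id": order_id, "amount": float(amount)}


def run() -> None:
    options = PipelineOptions(
        runner="DataflowRunner",            # swap for DirectRunner to test locally
        project="example-project",          # assumption
        region="us-central1",
        temp_location="gs://example-temp-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFiles" >> beam.io.ReadFromText(
                "gs://example-landing-bucket/sales/*.csv", skip_header_lines=1
            )
            | "Parse" >> beam.Map(parse_csv_line)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:curated.sales_events",
                schema="order_id:STRING,amount:FLOAT",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()
```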
Preferred Qualifications
- Experience with Bigtable and Firestore
- Strong communication skills: Able to effectively explain technical decisions to non-technical stakeholders
- Process improvement: Experienced in identifying process pain points and taking ownership of refining processes through to completion
- Collaborative problem solver: Able to take the initiative to understand a problem and make critical decisions to determine the next actionable steps
- Ability to work in a fast-paced, dynamic environment