
Solutions Architect - Data Engineering

Databricks
Summary
Join Databricks as a Specialist Solutions Architect (SSA) - Data Engineering and guide customers in building big data solutions on the Databricks platform. You will work directly with customers, collaborating with Solution Architects to provide hands-on support and expertise in Apache Spark™ and other data technologies. This customer-facing role involves assisting with the design and implementation of essential workloads, aligning customer technical roadmaps with the Databricks Data Intelligence Platform. As a go-to expert, you will report to the Specialist Field Engineering Manager and enhance your technical skills through mentorship and training programs. You will develop a specialty area, such as streaming, performance tuning, or industry expertise. The role offers opportunities for technical leadership, architectural design, data engineering, and model deployment. Remote work is possible.
Requirements
- 5+ years of experience in a technical role with expertise in at least one of the following: Software Engineering/Data Engineering: data ingestion, streaming technologies such as Spark Streaming and Kafka, performance tuning, and troubleshooting and debugging of Spark or other big data solutions
- Data Applications Engineering: Building use cases that use data, such as risk modeling, fraud detection, and customer lifetime value
- Extensive experience building big data pipelines
- Experience in maintaining and extending production data systems to evolve with complex needs
- Deep Specialty Expertise in at least one of the following areas: Experience scaling big data workloads (such as ETL) that are performant and cost-effective
- Experience migrating Hadoop workloads to the public cloud - AWS, Azure, or GCP
- Experience with large-scale data ingestion pipelines and data migrations - including CDC and streaming ingestion pipelines
- Expertise with cloud data lake technologies, such as Delta Lake and Delta Live Tables
- Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent practical experience
- Production programming experience in SQL and Python, Scala, or Java
- 2+ years of professional experience with big data technologies (e.g., Spark, Hadoop, Kafka) and architectures
- 2 years of customer-facing experience in a pre-sales or post-sales role
- Can meet expectations for technical training and role-specific outcomes within 6 months of hire
- Can travel up to 30% when needed
Responsibilities
- Provide technical leadership to guide strategic customers to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment
- Architect production-level data pipelines, including end-to-end pipeline load performance testing and optimization
- Become a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows
- Assist Solution Architects with more advanced aspects of the technical sale, including custom proof of concept content, estimating workload sizing, and custom architectures
- Provide tutorials and training to improve community adoption (including hackathons and conference presentations)
- Contribute to the Databricks Community
Benefits
- Annual performance bonus
- Equity
- This role can be remote