Data Software Engineer

Terawatt Infrastructure
Summary
Join Terawatt Infrastructure, a leader in EV charging solutions, as a Data Software Engineer. You will design and implement scalable applications to support the company's data needs, collaborating with data scientists and other teams. Key responsibilities include building and maintaining microservices that integrate with our data lake, developing data models and databases, and ensuring data governance and quality assurance. You will contribute to efficient, scalable data infrastructure aligned with company standards. A basic understanding of software development and data engineering best practices is essential. The role involves working with a variety of data sources and technologies to optimize data systems for business decision-making. Grow your career with a company committed to combating climate change and fostering a diverse and inclusive workplace.
Requirements
- Highly skilled and motivated software engineer
Responsibilities
- Design, build, and maintain scalable microservices that integrate with our data lake
- Architect, build, optimize, and maintain ETL/ELT pipelines for seamless data ingestion and transformation from multiple data sources into the data lake
- Develop and enforce data governance and quality assurance standards to ensure data accuracy, integrity, and consistency across systems
- Implement best practices for data modeling and database design to support business intelligence and analytics needs
- Collaborate with data analysts, scientists, and other stakeholders to understand data requirements and deliver efficient data solutions
- Conduct regular data validation, troubleshooting, and performance tuning of data systems to optimize efficiency
- Collaborate with other software engineers to integrate data solutions, leveraging a basic understanding of API development and data flow within software systems
Preferred Qualifications
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- 3+ years of experience in software engineering, with a focus on data pipelines and architecture
- Proficiency in designing and implementing data warehouses, databases, and data lakes
- Experience with cloud platforms such as AWS, GCP, or Azure for data storage and processing
- Expertise in SQL and proficiency with NoSQL databases (e.g., MongoDB)
- Strong knowledge of ETL/ELT processes and tools (e.g., Databricks, Airflow, or AWS Glue)
- Experience in data modeling, schema design, and performance tuning
- Hands-on experience with Big Data technologies like Hadoop, Spark, and Kafka
- Understanding of data governance frameworks and quality assurance processes
Benefits
$103,000 - $120,000 a year