Data Engineer

Toku
Summary
Join Toku, a company creating enterprise cloud communications and customer engagement solutions, as a Data Engineer. You will ensure the high quality and reliability of cutting-edge contact center and unified communication platforms, contributing to exceptional customer experiences. This impactful position involves shaping new processes, bringing new ideas, and selecting tools in a collaborative environment. You will be responsible for data pipeline design and development, data infrastructure management, data quality management, and data security and privacy management. The role requires collaboration with stakeholders across the organization and a passion for quality and detail. This is a growth-phase position within a highly visible team.
Requirements
- Proficiency in languages like Python and SQL for data manipulation, analysis, and automation
- Expertise in tools like Databricks, Spark, and Kafka for handling large and complex datasets
- Knowledge of data warehousing concepts and ETL/ELT processes to design and implement data pipelines
- Familiarity with cloud platforms (AWS) for deploying and managing data infrastructure
- Understanding of both relational (SQL) and NoSQL databases
- Ability to design efficient data models to support business needs
- At least a Bachelor's degree in Data Science, Information Technology, or a relevant field
- At least 3 years of relevant experience in Data Engineering
- Significant experience with Databricks, SQL, Python, ETL processes, and data engineering best practices
- Strong analytical and problem-solving skills to resolve data-related challenges
- Ability to work collaboratively in cross-functional teams
- Able to think critically and innovate to improve data processes
- Effective communication skills to collaborate with business stakeholders
- Knowledge of Agile methodologies and experience working in Agile environments
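
The ETL/ELT proficiency described above can be illustrated with a minimal sketch. This is a hypothetical example only: the table names and schema (a `raw_calls` source feeding an `agent_call_totals` summary) are invented for illustration and are not part of the role, and SQLite stands in for a real source system and warehouse.

```python
import sqlite3


def run_pipeline(src: sqlite3.Connection, tgt: sqlite3.Connection) -> int:
    """Toy ETL: extract raw call records, clean them, and load a summary table.

    All names and schemas here are hypothetical, purely for illustration.
    Returns the number of agents loaded into the target.
    """
    # Extract: pull raw rows from the source system.
    rows = src.execute(
        "SELECT agent, duration_seconds FROM raw_calls"
    ).fetchall()

    # Transform: drop invalid durations and aggregate total seconds per agent.
    totals: dict[str, int] = {}
    for agent, duration in rows:
        if duration is not None and duration > 0:
            totals[agent] = totals.get(agent, 0) + duration

    # Load: idempotent write into the warehouse-style target table.
    tgt.execute(
        "CREATE TABLE IF NOT EXISTS agent_call_totals "
        "(agent TEXT PRIMARY KEY, total_seconds INTEGER)"
    )
    tgt.executemany(
        "INSERT OR REPLACE INTO agent_call_totals VALUES (?, ?)",
        totals.items(),
    )
    tgt.commit()
    return len(totals)
```

Passing connections (rather than paths) keeps the extract and load endpoints swappable, which is the same separation of concerns that production pipelines on Databricks or Spark rely on at larger scale.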
Responsibilities
- Building robust and efficient data pipelines to extract, transform, and load data from various sources
- Ensuring optimal performance and scalability of the data warehouse
- Designing and implementing a scalable data architecture to support future growth and innovation
- Providing clean and reliable datasets that enable stakeholders to build and optimize industry-leading products
- Identifying opportunities to automate manual tasks and optimize data delivery
- Assisting stakeholders in leveraging data to drive product innovation
- Ensuring the data infrastructure can handle future growth and maintain high availability
- Maintaining data accuracy, integrity, and consistency to support reliable decision-making
- Adhering to data standards, security protocols, and compliance regulations
- Staying informed about emerging technologies and their potential benefits
- Following industry best practices to optimize data pipelines and processes
- Actively participating in knowledge sharing and contributing to the growth and development of the Data Engineering team
- Providing guidance and mentorship to fellow data engineers, offering support and training to enhance their skills and performance
- Maintaining excellent interpersonal skills, with strong written and oral communication abilities in English
- Working independently and effectively in a fast-paced, dynamic startup environment
- Fostering a continuous learning mindset, staying up-to-date with the latest trends and technologies
Preferred Qualifications
- Experience with Amazon Redshift
- Exposure to building and optimizing "big data" pipelines, architectures, and data sets is a plus