Architect, design, and build pipelines that deliver data with measurable quality within agreed SLAs
Partner with data engineers, domain experts, data analysts, and other teams to build foundational data sets that are trusted, well understood, and aligned with business strategy, and that enable self-service
Champion the overall strategy for data governance, security, privacy, quality, and retention that satisfies business policies and requirements
Own and document design and data lineage
Identify, document and promote best practices
Support and maintain the analytics tech ecosystem (data warehouse, ETL, and BI tools)
What you should have
BS or MS degree in Computer Science or an engineering discipline
At least 12 years of work experience in data management disciplines, including data integration, modeling, optimization, and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks
At least 6 years of experience working in cross-functional teams and collaborating with business stakeholders in areas such as Finance, HR, Sales, or Marketing in support of departmental and/or multi-departmental data management and analytics initiatives
Very strong experience in dimensional modeling, data warehouse support, scaling and optimization, performance tuning, and ETL pipelines
Deep understanding of both relational and big data environments. Preferred: prior experience with Snowflake, ETL tools (e.g., Informatica, Matillion, SnapLogic), Hive, Presto, and dimensional modeling
Problem solver with excellent interpersonal skills and the ability to make sound, complex decisions in a fast-paced technical environment
Ability to work across multiple areas, such as ETL data pipelines, data modeling and design, and writing complex SQL queries
Capable of planning and executing on both short-term and long-term goals, individually and with the team
Passionate about a range of technologies, including but not limited to SQL, NoSQL, and MPP databases
Hands-on experience with data warehouse technologies (e.g., Snowflake, Redshift) and big data technologies (e.g., Hadoop, Hive, Spark)
Proficiency with programming languages (e.g., Python) is a big plus
Excellent written and verbal communication and interpersonal skills; able to collaborate effectively with technical and business partners
Excellent understanding of engineering trade-offs (e.g., performance vs. cost, flexibility vs. simplicity)
Demonstrated ability to navigate between big-picture and implementation details