Deloitte Senior Data Engineer in Greensboro, North Carolina
Senior Data Engineer
At Human Experiences (Hux) by Deloitte Digital, we aim to deliver elevated human experiences by reinventing digital advertising and marketing, powered by data, AI/ML, and content. With customer data as the foundation, we help clients deliver dynamic, personalized customer experiences across all engagement channels using machine learning and artificial intelligence.
Work you'll do:
As a Senior Data Engineer, you will take the lead on developing and deploying transformational AI capabilities for large clients. You'll combine leading open-source tooling and techniques with a suite of customer experience libraries and solutions that automate the management of cross-channel communications with consumers. This includes integrating multiple terabytes of data across business units in a low-latency, auto-scaled environment. If you have deep experience in building, automating, and deploying data pipelines and workflows, we want to hear from you!
Your responsibilities will include:
Design, build and support scalable data pipelines, systems, and APIs for AdTech and MarTech products, specifically for Identity Resolution
Use distributed computing frameworks, graph-based and other cutting-edge technologies to resolve identities at scale
Lead cross-functional initiatives and collaborate with multiple, distributed teams
Produce high quality code that is robust, efficient, testable and easy to maintain
Deliver operational automation and tooling to minimize repeated manual tasks
Participate in code reviews and architectural decisions, give actionable feedback, and mentor junior team members
Influence the product roadmap and help cross-functional teams identify data opportunities to drive impact
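Graph-based identity resolution, as referenced in the responsibilities above, is commonly framed as finding connected components over a graph linking records to the identifiers (emails, device IDs, etc.) they share. The sketch below is a minimal single-machine illustration using a union-find structure; the names `UnionFind` and `resolve_identities` are illustrative only, and at the multi-terabyte scale described here this logic would typically run on a distributed framework such as Apache Spark.

```python
from collections import defaultdict

class UnionFind:
    """Disjoint-set structure with path compression and union by size."""
    def __init__(self):
        self.parent = {}
        self.size = {}

    def find(self, x):
        if x not in self.parent:
            self.parent[x] = x
            self.size[x] = 1
        while self.parent[x] != x:
            # Path compression: point each node at its grandparent.
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def resolve_identities(records):
    """Group records sharing any identifier into one identity cluster
    (connected components of the record-identifier graph)."""
    uf = UnionFind()
    for record_id, identifiers in records.items():
        for ident in identifiers:
            uf.union(("record", record_id), ("ident", ident))
    clusters = defaultdict(set)
    for record_id in records:
        clusters[uf.find(("record", record_id))].add(record_id)
    return sorted(sorted(c) for c in clusters.values())

# r1 and r2 share an email; r2 and r3 share a device ID, so
# r1, r2, r3 collapse into one identity; r4 remains separate.
records = {
    "r1": {"email:a@x.com"},
    "r2": {"email:a@x.com", "device:123"},
    "r3": {"device:123"},
    "r4": {"email:b@y.com"},
}
print(resolve_identities(records))  # → [['r1', 'r2', 'r3'], ['r4']]
```

The transitive merge (r1 with r3 despite sharing no identifier directly) is exactly what distinguishes graph-based resolution from pairwise matching.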
Qualifications:
Bachelor's or master's degree in Computer Science or a related technical field, or equivalent practical experience
5+ years of software development or data engineering experience in Python (preferred), Spark (preferred), Scala, Java or equivalent technologies
Experience designing and building highly scalable data pipelines (using Airflow, Luigi, etc.)
Knowledge of and experience working with large datasets
Proven track record of working with cloud technologies (Azure, AWS, GCP, etc.)
Experience developing or consuming web interfaces (REST APIs)
Experience with modern software development practices, leveraging CI/CD and containerization technologies such as Docker
Self-driven with a passion for learning and implementing new technologies
A history of working collaboratively with a cross-functional team of engineers, data scientists and product managers
Limited immigration sponsorship may be available.
Ability to travel up to 30%, on average, based on the work you do and the clients and industries/sectors you serve.
Preferred qualifications:
Experience with distributed computing or big data frameworks (Apache Spark, Apache Flink, etc.)
Experience with or interest in implementing graph-based technologies
Knowledge of or interest in data science & machine learning
Experience with backend infrastructure and how to architect data pipelines
Knowledge of system design and distributed systems
Experience working in a product engineering environment
Experience with data warehouses (Snowflake, Redshift, etc.)
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.