Deloitte Data Pipeline Development Lead in Washington, District of Columbia
US Data Pipeline Development Lead
Our firm is investing in new ways to deliver value to our clients, bringing IP to the market through the use of assets, solutions, and products. The Assets and Hybrid Business Ventures offering was established to surface, build, incubate, scale, and maintain new, world-class technology-based assets focused on improving the customer experience. Among our highest-priority assets is TrueServe, an integrated, multi-platform asset that accelerates contact center transformation: the move of legacy systems to the cloud, the realization of seamless omnichannel experiences, and hyper-personalization through the integration of customer data and conversational artificial intelligence.
Work you'll do:
We are looking for a Lead Developer to join the TrueServe Product Group. This role is on the Heartbeat team, a conversational AI platform and ecosystem. The selected candidate will be responsible for developing capabilities for our analytics data pipeline.
Your primary focus will be developing cloud infrastructure and server-side logic, defining and maintaining the central database, and ensuring high performance and responsiveness to requests from the analytics data pipeline.
If you have a strong background and experience working with Python, Java, Spring, and relational databases, we want to hear from you.
Your responsibilities will include:
Research, design, and implementation of low-latency, high-availability, and performant capabilities for the data pipeline.
Writing reusable, testable, and efficient code by implementing automated testing platforms and unit tests.
Implementation of security and data protection measures.
Help identify and propose solutions for technical and organizational gaps in our pipelines by running proofs of concept and experiments, working with Data Platform Engineers and Architects on implementation.
Build tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Contribute to code reviews and design reviews for the team.
Qualifications:
Bachelor's Degree in Software Engineering or equivalent experience.
Experience integrating multiple data sources and databases.
Understanding of fundamental design patterns and principles behind scalable applications.
Strong knowledge of automated end-to-end testing platforms and unit tests.
Problem solver, good communicator, flexible team player, and independent thinker who can read and write documentation and has the humility to ask questions.
Experience building and optimizing big data pipelines, architectures, and data sets, both batch and streaming.
Proficient with code versioning tools such as Git, and experienced with code reviews.
Proficient in writing API documentation in systems like Swagger.
Limited immigration sponsorship may be available.
5+ years of relevant industry experience as a Senior Data Engineer on AWS Cloud. Experience with log parsing frameworks and Kafka or SQS.
Experience with Elasticsearch, Groovy, the Spring Framework, event monitoring, Terraform, and Kubernetes (K8s).
Experience with the Spring Framework, including the Spring Boot stack and Gradle/Maven.
Experience with data pipeline/streaming tools such as SQS, Kafka, Spark, and Flink.
Strong understanding of security compliance, user authentication, and authorization between multiple systems (internal or third-party), servers, and environments.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.