Deloitte Jobs

Job Information

Deloitte Data Engineer - Project Delivery Specialist in Austin, Texas

Are you an experienced, passionate pioneer in technology - a solutions builder, a roll-up-your-sleeves technologist who wants a collaborative, think-tank environment where you can share new ideas with your colleagues daily - without the extensive demands of travel? If so, consider an opportunity with our Project Delivery Practice.

The team

Analytics & Cognitive

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment.

The Analytics & Cognitive team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

Analytics & Cognitive will work with our clients to:

• Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms

• Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions

• Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

Work you'll do

• Support the implementation of data integration requirements and develop the pipeline of data from the raw to the curation layers, including the cleansing, transformation, derivation and aggregation of data

• Communicate effectively (written and spoken), work with multi-location development teams and self-manage your own work

• Support the development of technical solutions to business problems
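To illustrate the raw-to-curation work described above, here is a minimal sketch of a cleanse, transform, derive and aggregate step. It uses plain Python so it is self-contained; in practice this role would build such pipelines with Spark and SQL, and the record fields (`region`, `amount`, `priority`) and the surcharge rule are hypothetical examples, not part of the job description.

```python
from collections import defaultdict

def curate(raw_records):
    """Cleanse, transform, derive, and aggregate raw records by region."""
    totals = defaultdict(float)
    for rec in raw_records:
        # Cleanse: drop records missing required fields
        if not rec.get("region") or rec.get("amount") is None:
            continue
        # Transform: normalize the region key
        region = rec["region"].strip().lower()
        # Derive: apply a hypothetical 10% surcharge to priority records
        amount = float(rec["amount"])
        if rec.get("priority"):
            amount = round(amount * 1.10, 2)
        # Aggregate: roll amounts up by region
        totals[region] += amount
    return dict(totals)

raw = [
    {"region": " East ", "amount": 100, "priority": True},
    {"region": "east", "amount": 50},
    {"region": None, "amount": 75},  # dropped by the cleansing step
]
print(curate(raw))  # {'east': 160.0}
```

The same four stages map directly onto a Spark job: the cleansing becomes a `filter`, the transformation and derivation become column expressions, and the aggregation becomes a `groupBy` over the curated frame.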

Qualifications Required

• Bachelor's degree, preferably in Computer Science, Information Technology, Computer Engineering, or a related IT discipline; or equivalent experience

• 5+ years of hands-on experience as a Data Engineer or Big Data developer

• 5+ years of experience in Python, Spark and SQL

• 3+ years of experience building scalable, high-performance data pipelines using Apache Hadoop, Apache Spark, Pig or Hive

• 3+ years of experience in Python / Unix shell scripting

• Experience with cross-platform big data file formats such as Apache Avro and Apache Parquet

• Hands-on big data / Hadoop performance tuning and optimization experience

• Strong SQL knowledge with the ability to work with the latest database technologies

• Strong data and logical analysis skills

• Must be willing to relocate to San Jose, CA, or Austin, TX

• Limited immigration sponsorship may be available

• Travel up to 25% (While 25% of travel is a requirement of the role, due to COVID-19, non-essential travel has been suspended until further notice.)


  • Presto/Dremio

  • Hive

  • Data Analytics

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.