Deloitte Jobs

Job Information

Deloitte Data Engineer - Project Delivery Specialist in Chicago, Illinois

Are you an experienced, passionate pioneer in technology - a solutions builder, a roll-up-your-sleeves technologist who wants a daily collaborative environment with a think-tank feel, and the chance to share new ideas with your colleagues - without the extensive demands of travel? If so, consider an opportunity with Deloitte under our Project Delivery Talent Model. The Project Delivery Model (PDM) is a talent model tailored specifically for long-term, onsite client service delivery. PDM practitioners are local to project locations, minimizing extensive travel while providing you with a full career path within the firm.

Work you'll do/Responsibilities

  • Support the implementation of data integration requirements and develop the pipeline of data from raw to curated layers, including the cleansing, transformation, derivation, and aggregation of data

  • Communicate effectively (written and spoken), work with multi-location development teams, and self-manage your own work

  • Support the development of technical solutions to business problems

    The Team

    In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment.

    The AI & Data Operations team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

    AI & Data Operations will work with our clients to:

  • Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms

  • Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions

  • Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

    Qualifications

    Required

  • Bachelor's degree, preferably in Computer Science, Information Technology, Computer Engineering, or related IT discipline; or equivalent experience

  • 5+ years of hands-on experience as a Data Engineer or Big Data developer

  • 5+ years of experience in Python, Spark and SQL

  • 3+ years of experience in building scalable and high-performance data pipelines using Apache Hadoop, Apache Spark, Pig or Hive

  • 3+ years of experience in Python / Unix shell scripting

  • Experience with big data cross-platform compatible file formats such as Apache Avro and Apache Parquet

  • Hands-on big data/Hadoop performance tuning and optimization experience

  • Strong SQL knowledge with the ability to work with the latest database technologies

  • Strong data & logical analysis skills

  • Limited immigration sponsorship may be available

  • Travel up to 25% (While 25% of travel is a requirement of the role, due to COVID-19, non-essential travel has been suspended until further notice.)

    Preferred

  • Presto/Dremio

  • HIVE

  • Data Analytics

    All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.