Deloitte Data Engineer, Python/SQL, Senior Consultant in Chicago, Illinois
Work you'll do
Consultants work within an engagement team. Key responsibilities will include:
Functioning as integrators between business needs and technology, helping to create solutions that meet clients' business needs.
Defining systems strategy, developing system requirements, designing, prototyping, and testing custom technology solutions, and supporting system implementation.
Analytics & Cognitive
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment.
The Analytics & Cognitive team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
Analytics & Cognitive will work with our clients to:
Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements
3+ years of relevant technology architecture consulting or industry experience, including experience in information delivery, analytics, or data-driven artificial intelligence
3+ years of experience in Python
3+ years of experience in SQL
2 years of hands-on experience with data core modernization and data ingestion
Experience with Data Integration on traditional and Hadoop/Cloudera environments
2+ years of experience leading workstreams or small teams
Bachelor's Degree or equivalent professional experience
Travel up to 50% (While 50% of travel is a requirement of the role, due to COVID-19, non-essential travel has been suspended until further notice.)
Limited sponsorship may be available
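The Python and SQL requirements above go hand in hand in practice. As a minimal sketch of the combination, using Python's built-in sqlite3 module (the table schema and rows are invented for illustration, not taken from the role description):

```python
import sqlite3

# In-memory database; schema and data are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(1, 10.0), (1, 5.5), (2, 3.0)])

# Aggregate in SQL, then consume the result set in Python.
rows = conn.execute(
    "SELECT user_id, SUM(amount) FROM events "
    "GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # [(1, 15.5), (2, 3.0)]
```

The same pattern, pushing aggregation into the query engine and handling results in Python, carries over to Spark SQL or cloud warehouses at larger scale.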
An advanced degree in the area of specialization is preferred.
Deep hands-on skills with Python (PySpark)
Experience with Artificial Intelligence, Machine Learning, Computer Vision and/or Conversational/NLP applications
Experience in designing and implementing scalable, distributed systems leveraging cloud computing technologies such as AWS EC2, AWS Elastic MapReduce (EMR), and Microsoft Azure
Experience working with multi-Terabyte data sets
Experience designing and developing data cleansing routines utilizing typical data quality functions involving standardization, transformation, rationalization, linking and matching
Knowledge of data, master data and metadata related standards, processes and technology
Strong oral and written communication skills, including presentation skills (MS Visio, MS PowerPoint).
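The data-cleansing work described above (standardization, transformation, linking and matching) can be sketched in plain Python; the field names and the email-based matching rule below are assumptions made for illustration, not part of the role description:

```python
def standardize(record):
    """Normalize whitespace and casing so records compare consistently."""
    return {
        "name": " ".join(record.get("name", "").split()).title(),
        "email": record.get("email", "").strip().lower(),
    }

def link_records(source_a, source_b):
    """Link records across two sources on a standardized email key."""
    index = {standardize(r)["email"]: r for r in source_b}
    matched, unmatched = [], []
    for r in source_a:
        key = standardize(r)["email"]
        if key in index:
            matched.append((r, index[key]))
        else:
            unmatched.append(r)
    return matched, unmatched

# Illustrative inputs: the same person recorded differently in two systems.
crm = [{"name": "  ada LOVELACE ", "email": "Ada@Example.com"}]
billing = [{"name": "Ada Lovelace", "email": "ada@example.com "}]
matched, unmatched = link_records(crm, billing)
print(len(matched), len(unmatched))  # 1 0
```

Production routines of this kind typically run as PySpark transformations over multi-terabyte datasets, but the standardize-then-match structure is the same.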
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.