Deloitte Data Engineer, Analytics & Cognitive - Senior Consultant (SQL/Python) in Pittsburgh, Pennsylvania
The Team: Analytics & Cognitive
In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment.
The Analytics & Cognitive team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
Analytics & Cognitive will work with our clients to:
Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements
3+ years of relevant technology consulting or industry experience, including experience in information delivery, analytics, and business intelligence
2+ years of experience in Python and/or R
2+ years of experience with SQL
2+ years of hands-on experience with core data modernization and data ingestion
1+ years of experience leading workstreams or small teams
Bachelor's Degree or equivalent professional experience
Travel up to 50% (while 50% travel is a requirement of the role, due to COVID-19, non-essential travel has been suspended until further notice)
Limited immigration sponsorship may be available.
An advanced degree in the area of specialization is preferred.
Experience with Cloud using Amazon Web Services (AWS), Microsoft Azure, and/or Google Cloud Platform (GCP)
Experience with PySpark, Spark, Scala
Understanding of data warehousing and its benefits, including data architecture, data quality processes, warehouse design and implementation, table structures, fact and dimension tables, logical and physical database design, data modeling, reporting-process metadata, and ETL processes.
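To illustrate the fact/dimension concept mentioned above, here is a minimal star-schema sketch in Python using SQLite; the table and column names (dim_customer, fact_sales) are illustrative assumptions, not part of the role description.

```python
import sqlite3

# Minimal star schema: one fact table keyed to one dimension table.
# All names here are hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT)")
cur.execute("CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, "
            "customer_id INTEGER, amount REAL)")

cur.executemany("INSERT INTO dim_customer VALUES (?, ?)",
                [(1, "East"), (2, "West")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(10, 1, 100.0), (11, 1, 50.0), (12, 2, 75.0)])

# Typical BI query: aggregate the fact table, grouped by a dimension attribute.
cur.execute("""
    SELECT d.region, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer d ON f.customer_id = d.customer_id
    GROUP BY d.region
    ORDER BY d.region
""")
rows = cur.fetchall()
print(rows)  # [('East', 150.0), ('West', 75.0)]
conn.close()
```

The dimension table holds descriptive attributes (region), while the fact table holds measures (amount); reporting queries join the two and aggregate.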
Experience designing and implementing reporting and visualization for unstructured and structured data sets
Experience designing and developing data cleansing routines utilizing typical data quality functions involving standardization, transformation, rationalization, linking and matching
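A data-cleansing routine of the kind described above can be sketched with standard-library tools alone; the standardization rules and the 0.8 similarity threshold below are assumptions chosen for the example.

```python
import difflib

def standardize(name):
    # Standardization: trim, collapse internal whitespace, uppercase.
    return " ".join(name.split()).upper()

def link(name, canonical, threshold=0.8):
    # Linking/matching: fuzzy-match a standardized value against a
    # canonical list using a similarity ratio; threshold is illustrative.
    best, best_score = None, 0.0
    for cand in canonical:
        score = difflib.SequenceMatcher(None, standardize(name), cand).ratio()
        if score > best_score:
            best, best_score = cand, score
    return best if best_score >= threshold else None

canonical = ["ACME CORP", "GLOBEX LLC"]
print(standardize("  acme   corp "))          # ACME CORP
print(link("  Acme  Corp. ", canonical))      # ACME CORP
```

Production cleansing pipelines typically layer many such functions (standardize, transform, rationalize, link) and run them per column as part of ETL, but the shape of each routine is the same as this sketch.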
Knowledge of data, master data and metadata related standards, processes and technology
Experience working with multi-Terabyte data sets
Experience with Data Integration on traditional and Hadoop environments
Strong oral and written communication skills, including presentation and diagramming skills (MS PowerPoint, MS Visio).
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.