Deloitte Data DevOps Consultant in Austin, Texas

AI and Data Operations

Data DevOps Consultant - Data Operations & Engineering

We establish and operate the data fabric of our clients' largest and most complex deployments, enabling organizations to adapt to rapidly changing business needs using their core data platforms, applications, and solutions. We deliver this through a combination of people, automation, AI, and industry-leading practices to enable intelligent operations across all levels of the enterprise.

We provide flexible, long-term staffing models that allow us to optimize the data lifecycle, drive agility and scale, put data in the hands of decision-makers faster, and deliver enhanced analytics insights.

Deloitte's AI and Data Operations offering provides our clients a proven approach to optimizing, modernizing, and operating their data and analytics capabilities, as well as their platforms and infrastructure. Deloitte supports clients as they shift their data and analytics investment and operational focus from routine capabilities to driving business value and innovation.

Work you'll do

Consultants in our Data Operations and Engineering capability work within an engagement team. Key responsibilities will include:

  • Support the management and operation (including minor enhancements) of end-to-end data integration and processing pipelines and the underlying data platforms, including data warehouses, data marts, and data lakes, as well as the data needs of reporting, visualization, and analytics applications.

  • Work side by side with our Data Modernization and Analytics implementation teams, leveraging Data DevOps constructs, principles, and tools to support sustainment of each agile sprint/release delivered.

  • Manage day-to-day interactions with client stakeholders to ensure satisfactory measurement and reporting of key Data DevOps metrics.

  • Coordinate across multiple delivery centers to meet engagement service-level agreements.

  • Support and lead project threads.

  • Identify and solve problems using analysis, experience, and judgment.

  • Deliver high-quality work and adapt to new challenges, as an individual or as part of a team.

The Team

AI and Data Operations

In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment.

The AI and Data Operations team leverages the power of data, analytics, intelligent automation, science, and cognitive technologies, combined with the Data DevOps approach, to help uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the Strategy and the Analytics and Cognitive practices, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

The AI & Data Operations team will work with our clients to:

  • Stand up and operate their data- and analytics-related capabilities, applications, and infrastructure by providing multi-year managed services that leverage the constructs and principles of Data DevOps

  • Provide capacity by embedding talent in our client organizations to quickly scale data, business intelligence, visualization, and analytics teams up and down, on demand

  • Support as-a-service subscription models at scale that include analytics and data assets addressing industry- and function-specific needs

Qualifications

Required:

  • 2+ years of relevant technology consulting or industry experience in data engineering, data pipeline management, information delivery, analytics, and business intelligence

  • 1+ year working in an operations and maintenance role on data platform projects (e.g., data warehouse, data mart, and enterprise data lake projects), preferably in cloud environments

  • 1+ year of hands-on experience with data warehouse, data mart, and data lake implementations, core modernization, data ingestion, and data extraction

  • Experience with data integration and big data technologies, including some (not all) of the following: ETL/ELT tools such as Informatica and Talend; Spark; Airflow; and cloud-based data integration tools such as AWS Glue, GCP Dataflow, and Azure Data Factory

  • Hands-on experience in implementing or operating data and analytics solutions on at least one cloud platform (AWS, Azure, GCP)

  • Experience working with DevOps methodologies and tools such as GitHub, Jenkins, Maven, and Bitbucket

  • Experience designing and developing data cleansing routines utilizing typical data quality functions involving standardization, transformation, rationalization, linking, and matching

  • Well versed in ITSM (IT Service Management) tools and the individual ITSM modules:

      • Incident Management

      • Change Management

      • Problem Management

      • Escalation Management

      • Release Management

      • Configuration Management

  • Good understanding of how to troubleshoot issues with the data integration and big data technologies mentioned previously

  • Experience troubleshooting performance issues in the data integration, provisioning, and load processes of data warehouses, data marts, and data lakes

  • Ability to monitor server health in real time and provide recommendations

  • Experience debugging issues with ETL tools (e.g., Informatica, Talend) and data processing frameworks such as Spark

  • Strong SQL knowledge

  • Ability to produce and maintain accurate documentation and operational reports

  • 2+ years of experience leading workstreams or small teams

  • Ability to travel up to 50% (while 50% travel is a requirement of the role, due to COVID-19, non-essential travel has been suspended until further notice)

  • Bachelor's Degree or equivalent professional experience

  • Limited sponsorship may be available

Preferred:

  • AWS, GCP, or Azure certification; Hadoop certification; or Spark certification

  • ITIL Certification

  • Experience with cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)

  • Experience with data integration products such as Informatica PowerCenter Big Data Edition (BDE), Talend, etc.

  • Experience designing and implementing reporting and visualization for unstructured and structured data sets

  • Experience designing and implementing scalable, distributed systems leveraging cloud computing technologies such as AWS EC2, AWS Elastic MapReduce (EMR), and Microsoft Azure

  • Experience designing and implementing data ingestion techniques for real-time and batch processes for video, voice, weblog, sensor, machine, and social media data into cloud ecosystems or on-premises Hadoop or data warehouse ecosystems

  • Knowledge of data, master data, and metadata-related standards, processes, and technology

  • Experience working with multi-terabyte data sets

  • Experience with data integration in traditional, Hadoop, and cloud environments

  • Ability to work independently and manage small engagements or parts of large engagements.

  • Strong oral and written communication skills, including presentation skills (MS Visio, MS PowerPoint).

  • Strong problem solving and troubleshooting skills with the ability to exercise mature judgment.

  • Eagerness to mentor junior staff.

  • An advanced degree in the area of specialization.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.
