Deloitte Data Engineer in Austin, Texas

S&A - Data Engineer - Project Delivery Specialist - PDM

Are you an experienced, passionate pioneer in technology - a solutions builder, a roll-up-your-sleeves technologist who wants a collaborative, think-tank environment where you can share new ideas with your colleagues daily - without the extensive demands of travel? If so, consider an opportunity with our Project Delivery team.

Work you'll do/Responsibilities

  • Partner with product, analytics, and data engineering teams to interpret business and analytics requirements and convert them into robust data pipelines

  • Work with feature engineering and data engineering teams to drive product reporting and support development

  • Support reporting for multiple projects concurrently

The Team

The AI & Data Operations team provides managed AI, intelligent automation, and data DevOps services across the Advise-Implement-Operate spectrum, in flexible engagement models, to help clients drive rapid innovation and achieve sustained business outcomes at scale. AI & Data Operations goes to market through four Market Offerings:

  • AI Foundry: Offer a full portfolio of capabilities and services to help clients accelerate and scale their AI/ML/advanced analytics journey from data to insights.

  • Data DevOps: Administer day-to-day operations with DevOps features tied to managing data foundries, data applications, and data production systems, including data pipelines, data curation, data management, and data delivery.

  • Intelligent Automation: Leverage robotic and intelligent automation technologies to re-imagine business processes, augmenting the human workforce with an AI-enabled digital workforce.

  • AI & Data Tech Preferred Provider: Drive engagement with our clients on large-scale technology arrangements that enable foundry models as well as meet capacity-based contract needs.

Required Qualifications

  • Experience with Java, Spark (Scala/Python), Pig/Hive, SQL, and MapReduce (optional)

  • At least 3 years of experience building scalable, high-performance data pipelines using Apache Hadoop

  • Experience with cross-platform big data file formats such as Apache Avro and Apache Parquet (a minimal sketch follows this list)

  • Hands-on debugging experience

  • Hands-on performance tuning and optimization experience

  • Experience in building and maintaining data quality frameworks

  • Experience in automating code deployments

  • Ability to improve existing codebases using standard methodologies

  • Ability to collaborate effectively with teams in multiple locations

  • Must be willing to work in the Austin, TX area

  • Bachelor's degree, preferably in Computer Science, Information Technology, Computer Engineering, or a related IT discipline; or equivalent experience

  • Limited Sponsorship: Limited immigration sponsorship may be available

  • Travel up to 10% annually
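
Purely as an illustration of the pipeline skills listed above, here is a minimal PySpark sketch that reads an Avro dataset, applies a simple data-quality filter, and writes partitioned Parquet. The paths, column names (event_id, event_date), and package version are hypothetical, and reading Avro assumes the external spark-avro package is on the classpath (e.g. --packages org.apache.spark:spark-avro_2.12:3.5.0).

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("avro-to-parquet").getOrCreate()

    # Read a cross-platform Avro dataset (hypothetical path).
    events = spark.read.format("avro").load("hdfs:///data/raw/events")

    # Basic data-quality gate: drop rows missing the key, then deduplicate on it.
    clean = (
        events
        .filter(F.col("event_id").isNotNull())
        .dropDuplicates(["event_id"])
    )

    # Write columnar Parquet, partitioned for downstream reporting queries.
    clean.write.mode("overwrite").partitionBy("event_date").parquet(
        "hdfs:///data/curated/events"
    )

    spark.stop()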

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.
