Deloitte AI Engineer in Atlanta, Georgia

As an AI Engineer, you'll support the development and deployment of transformational AI capabilities for large clients. You'll combine leading open-source tooling and techniques with a suite of customer experience libraries and solutions that automate the management of cross-channel consumer communications. We make heavy use of the Python machine learning ecosystem and build systems that deliver massive decisioning throughput under tight latency constraints in our real-time systems. If you have deep experience designing, implementing, automating, and deploying machine learning pipelines and workflows, we want to hear from you!

Your responsibilities will include:

  • Participate in all phases of the model development lifecycle

  • Productionalize machine learning to drastically reduce total cost per decision, replacing expensive human-driven decisions with cheaper, more effective machine-driven ones

  • Help design and implement functional requirements for client engagements

  • Collaborate with our services data scientists to deploy and use libraries and APIs which make machine learning for customer use cases both easy and powerful, and help brands gain deep understandings of their consumers

  • Prepare technical documentation

  • Integrate with surrounding technology components and services

  • Coach junior team members

Skills for success in this role include:

  • Deep interest in data science and software development

  • Eager to work with data scientists, fellow engineers, and product owners

  • Experienced with collaborative techniques like pair programming and whiteboard design sessions

  • Continuously learning and improving, and constantly exploring new languages, tools, and techniques

Our team

You'll join a team of passionate, talented "pure" data scientists and hybrid AI engineers who collaborate to design, build, and maintain cutting-edge AI solutions that arm our clients with real-time customer insights that deliver significant value. If you are intellectually curious, hardworking, and solution-oriented, you will fit right into our fast-paced, collaborative environment.



Required:

  • Bachelor's Degree in computer science, engineering, or a related field

  • 3+ years of experience authoring, supporting, or providing a data science platform to data scientists

  • Deep knowledge in the machine learning lifecycle, and in ways to facilitate collaboration and productivity in each of its phases

  • Experience working with data scientists and expertise in solving their workflow problems

  • Knowledge of common machine learning frameworks and libraries, and of ways to productionalize their inputs and outputs

  • Comfort with various machine learning techniques and their practical implementation, particularly reinforcement learning

  • Experience with one or more common workflow/pipelining frameworks (Kubeflow, MLflow, Argo, or equivalents)

  • Strong knowledge of the Python and Jupyter ecosystems (JupyterLab, Notebook, Binder) and their libraries, norms, and tooling

  • Exposure to AutoML tooling (H2O, DataRobot, or equivalents)

  • Experience in deploying and maintaining enterprise-scale machine learning applications in production

  • 2+ years of experience writing well-tested production software

  • 1+ years of experience with distributed, high-throughput, low-latency architectures

  • 1+ years of experience building software on top of major container technology (Kubernetes, Docker, or similar)

  • Strong testing mindset with experience writing tests at various levels of granularity

  • Familiarity with continuous integration tools (GitHub Actions, Travis CI, etc.)

  • A history of collaborating effectively with DevOps engineers and project managers to meet project goals

  • Proven track record working with products from major cloud providers (AWS, GCP, Azure, etc.)

  • Limited immigration sponsorship may be available

    Helpful, but not required:

  • Experience with large consumer data sets used in performance marketing

  • Experience writing or running Terraform or other infrastructure-as-code automation

  • Well-versed in, or a contributor to, data-centric open-source projects

  • Experience in performance analysis and optimization of machine learning applications, e.g., in optimizing code written by others

  • Contributions to projects within the wider open-source ecosystem

  • Proven ability to communicate verbally and in writing within a high-performance, collaborative environment

  • Exposure to commonly used relational and non-relational databases



All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.