Job Description:

Immediate need for a talented AWS DevOps/MLOps Engineer. This is a 7+ month contract opportunity with long-term potential, located in Plano, TX (onsite). Please review the job description below and contact me ASAP if you are interested.
 
Job ID: 24-37726
 
Pay Range: $55 - $60/hour.  Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
 
Key Requirements and Technology Experience:

  • Key Skills: MLOps, AWS, DevOps.
  • Experience in AWS system and network architecture design, with a specific focus on AWS SageMaker and AWS ECS.
  • Experience developing and maintaining Client systems built with open source tools.
  • Experience developing with containers and Kubernetes in cloud computing environments.
  • Experience with one or more data-oriented workflow orchestration frameworks (Kubeflow, Airflow, Argo).
  • Design the data pipelines and engineering infrastructure to support our clients’ enterprise machine learning systems at scale.
  • Develop and deploy scalable tools and services for our clients to handle machine learning training and inference.
  • Support model development, with an emphasis on auditability, versioning, and data security.
  • Experience with data security and privacy solutions such as Denodo, Protegrity, and synthetic data generation.
  • Ability to develop applications using Python and deploy them to AWS Lambda and API Gateway.
  • Ability to develop Jenkins pipelines using Groovy scripting.
  • Good understanding of testing frameworks like pytest.
  • Ability to work with AWS services like S3, DynamoDB, Glue, Redshift, and RDS.
  • Proficient understanding of Git and version control systems.
  • Familiarity with continuous integration and continuous deployment.
  • Develop Terraform modules to deploy standard infrastructure.
  • Ability to develop deployment pipelines using Jenkins and XL Release.
  • Experience using Python (boto3) to automate cloud operations (a minimal sketch follows this list).
  • Experience documenting technical solutions and creating solution diagrams.
  • Good understanding of simple Python applications that can be deployed as Docker containers.
  • Experience creating workflows using AWS Step Functions.
  • Create Docker images using custom Python libraries.
  • AWS (experience mandatory): S3, KMS, IAM (roles and policies), EC2, ECS, Batch, ECR, Lambda, DataSync, EFS, CloudTrail, Cost Explorer, ACM, Route 53, SNS, SQS, ELB, CloudWatch, VPC, and Service Catalog.
  • Automation (experience mandatory): Terraform, Python (boto3), serverless, Jenkins (Groovy), and Node.js.
  • Big Data (knowledge): Redshift, DynamoDB, Databricks, Glue, and Athena.
  • Data Science (experience): SageMaker, Athena, Glue, DynamoDB, Databricks, and MWAA (Airflow).
  • DevOps (experience mandatory): Python, Terraform, Jenkins, GitHub, Makefiles, and shell scripting.
  • Data Virtualization (knowledge): Denodo.
  • Data Security (knowledge): Protegrity.
  • Bachelor’s degree from a reputable institution/university.
  • 10+ years of experience building end-to-end systems as a Platform Engineer, Client DevOps Engineer, or Data Engineer.
  • 4+ years of experience in Python, Groovy, and Java programming.
  • Experience working in a Scrum environment.
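
As an illustration of the boto3 automation noted above, here is a minimal sketch of the kind of task involved: finding S3 buckets that match a naming convention and applying a standard cost-allocation tag. The bucket prefix and tag values are hypothetical, purely for demonstration; actual conventions would come from the client's environment.

    import boto3

    s3 = boto3.client("s3")

    def tag_buckets(prefix: str, team: str) -> None:
        # Tag every bucket whose name starts with the given prefix.
        # Note: put_bucket_tagging replaces any existing tag set.
        for bucket in s3.list_buckets()["Buckets"]:
            name = bucket["Name"]
            if not name.startswith(prefix):
                continue
            s3.put_bucket_tagging(
                Bucket=name,
                Tagging={"TagSet": [{"Key": "team", "Value": team}]},
            )
            print(f"tagged {name}")

    if __name__ == "__main__":
        tag_buckets(prefix="ml-platform-", team="mlops")  # hypothetical values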

Our client is a leader in the IT industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
 
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.



Client: Pyramid Consulting, Inc.

             
