- Lead the design, implementation, and optimization of DevOps processes and tools to support continuous integration, delivery, and deployment across our technology stack.
- Architect, build, and maintain scalable and secure infrastructure in cloud environments, leveraging technologies such as AWS, Azure, or Google Cloud Platform.
- Develop and maintain automation scripts and tools using programming languages like Python, Java, or other relevant languages to streamline deployment, monitoring, and maintenance processes.
- Lead efforts to deploy and manage Hadoop-based systems, including clusters, data processing, and analytics platforms.
- Collaborate with development, data engineering, and quality assurance teams to ensure seamless integration and delivery of systems and applications.
- Implement best practices for configuration management, containerization, and orchestration using tools such as Docker, Kubernetes, and other containerization platforms.
- Drive the implementation of monitoring, logging, and alerting solutions to ensure system health and performance, utilizing tools such as the ELK stack, Prometheus, and Grafana.
- Lead efforts to establish and maintain security best practices, access controls, and encryption mechanisms for cloud and Hadoop environments.
- Mentor and provide technical guidance to team members, fostering a culture of continuous learning, collaboration, and innovation within the DevOps team.
- Stay updated with emerging technologies, industry trends, and best practices in DevOps, cloud, and big data ecosystems.
- Experience: 12-15 years as a DevOps Lead.
- Hands-on experience setting up infrastructure for Hadoop clusters and performing on-premises-to-cloud migrations.
- Expert in containerization and deployment for both on-premises and cloud use cases.
- Proficient in Linux CLI commands, shell scripting, and SQL.
- Proficient in a programming language such as Python or Java.
- Production experience building and maintaining CI/CD pipelines and monitoring using Bitbucket, Jenkins, and OCP or Kubernetes.
- Production experience building and maintaining data pipelines.
- Experience with application and load balancer development and maintenance.
- Working experience with big data platforms, ELK, and graph database deployment and migration.
- Front-end integration experience with React or AngularJS, and API integration.
- Full-stack experience is good to have for the end-to-end product life cycle: build, test, deploy.
- Experience with data governance, PII data, and tokenization/encryption requirements.
- Good to have: AWS (or other cloud) migration and CI/CD pipeline knowledge.
- Good to have: an understanding of MLOps and ML model migration.
- Coordinate with clients to build confidence in the solution and create new opportunities.
- Team leadership, task management, and on-time reporting.
- A highly competitive compensation and benefits package
- A multinational organization with 48 offices in 19 countries and the possibility to work abroad
- Laptop and a mobile phone
- 10 days of paid annual leave (plus sick leave and national holidays)
- Maternity & Paternity leave plans
- A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region)
- Retirement savings plans
- A higher education certification policy
- Commuter benefits (varies by region)
- Extensive training opportunities, focused on skills, substantive knowledge, and personal development
- On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses
- Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups
- Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms
- A flat and approachable organization
- A truly diverse, fun-loving and global work culture