Job Description:

  • Design, develop, and implement security frameworks and strategies to protect AI/ML models and their use, as well as related data, applications, and systems, from adversarial attacks and other security threats.
  • Develop standards and best practices for the secure use, development, deployment, and operationalization of AI/ML (predictive AI, generative AI, and Large Language Models).
  • Analyze potential security risks in AI/ML applications, such as model poisoning, data leakage, and other adversarial machine learning threats, and define mitigations that can be effectively implemented.
  • Collaborate with cross-functional teams to ensure AI/ML systems are integrated, deployed, or leveraged with robust security practices, whether throughout the development lifecycle of proprietary models or through the implementation of pre-trained models, AI-based SaaS solutions, ...
  • Research and stay ahead of emerging security threats in AI/ML and propose innovative defense strategies.
  • Conduct security assessments and robustness testing of AI/ML models, with appropriate tooling, identifying weaknesses and providing recommendations for improvement.
  • Collaborate with internal teams to ensure compliance with relevant regulations, standards, and security frameworks in AI/ML-related initiatives.
  • Provide guidance and act as a centre of expertise for business, technical, legal, privacy, and risk teams on assessing risks and implementing controls for AI/ML projects.
  • Effectively communicate complex AI/ML security assessments, risks, controls, and mitigations to management, technical teams, and non-technical stakeholders.

What you need to be successful:

  • University degree in Computer Science, AI/ML, Cybersecurity, or a related field, or equivalent experience.
  • 8-10 years of relevant experience, including in AI/ML model development and deployment.
  • Proficiency in programming languages such as Python, Java, or C++, and in AI/ML frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, Keras, and XGBoost.
  • Strong understanding of security concepts, including secure coding practices, threat modeling, and risk assessment.
  • Expertise in securing AI/ML systems, including protecting against adversarial attacks and data poisoning, ensuring the integrity of model training and inference processes, and preserving the confidentiality of models and training data.
  • Strong analytical and problem-solving skills, attention to detail, and ability to work in a collaborative team environment.
  • Excellent communication skills, including the ability to translate complex technical information for a non-technical audience.

We are an equal opportunity employer. All aspects of employment, including the decision to hire, promote, discipline, or discharge, will be based on merit, competence, performance, and business needs. We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, citizenship/immigration status, veteran status, or any other status protected under federal, state, or local law.
