A Fortune 100 client is seeking an AI Safety & Security Engineer/Analyst to join an organization focused on AI Observability and AI enablement. The ideal candidate will assess the client's environment holistically, from a platform perspective, and ensure the correct safety and security standards are in place. This candidate should be well versed in which tools are needed and which can best improve AI Safety & Security. Because this group's AI environment is not yet fully mature, the candidate will act as an internal consultant, advising as needed on what to implement and improve. Responsibilities include the following:
- Design and implement measures to secure AI models and data pipelines from threats such as data poisoning, model theft, or inference attacks.
- Perform threat modeling and vulnerability assessments for AI-driven applications.
- Ensure alignment with ethical AI principles, including fairness, accountability, transparency, and explainability.
- Monitor AI systems for compliance with safety standards and industry best practices.
- Integrate encryption, secure access, and robust authentication into AI systems.
- Collaborate with data scientists to build fail-safe mechanisms and redundancies in AI systems.
- Conduct audits to identify and mitigate bias in datasets, models, and decision-making processes.
- Support regulatory compliance efforts (e.g., GDPR, CCPA, EU AI Act) and adherence to internal Responsible AI (RAI) guidelines.
We are a company committed to creating inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity employer that believes everyone matters. Qualified candidates will receive consideration for employment opportunities without regard to race, religion, sex, age, marital status, national origin, sexual orientation, citizenship status, disability, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com. The EEOC "Know Your Rights" Poster is available here.
To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
- Proven experience with AI/ML model deployment, adversarial ML techniques, and robust system design.
- Strong knowledge of AI ethics, RAI frameworks, and relevant regulations (e.g., EU AI Act, GDPR).
- Expertise in cybersecurity principles, including threat modeling, cryptographic protocols, and secure coding practices.
- Proficiency with security tools (e.g., OWASP resources, SIEM, SAST/DAST tools) and AI frameworks (e.g., PyTorch, TensorFlow, Hugging Face).
- Experience with AI model interpretability tools (e.g., SHAP, LIME) and bias detection frameworks.
- Familiarity with open-source libraries used to customize and enforce the client's AI policies.
- Knowledge of secure containerization (e.g., Docker, Kubernetes) and cloud-native AI deployments.
- Certifications such as Certified Ethical Hacker (CEH), Offensive Security Certified Professional (OSCP), CISSP, an AI ethics certification, or AWS/Azure security certifications.
Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401(k) retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.