Google
·
July 30, 2025
This job has closed.

Security Engineer, AI Agent Security

New York, NY
Full-time
Onsite
$141K/yr - $202K/yr
Entry to Mid Level
Google specializes in internet-related services and products, including search, advertising, and software. The Security Engineer role focuses on creating and maintaining a safe operating environment by identifying and addressing security threats related to AI agents and large language models (LLMs).

Responsibilities

  • Conduct research to identify, analyze, and understand novel security threats, vulnerabilities, and attack vectors targeting AI agents and underlying LLMs (e.g., advanced prompt injection, data exfiltration, adversarial manipulation, attacks on reasoning/planning).
  • Design, prototype, evaluate, and refine innovative defense mechanisms and mitigation strategies against identified threats, spanning model-based defenses, runtime controls, and detection techniques.
  • Develop proof-of-concept exploits and testing methodologies to validate vulnerabilities and assess the effectiveness of proposed defenses.
  • Collaborate with engineering and research teams to translate research findings into practical, scalable security solutions deployable across Google's agent ecosystem.
  • Stay current with AI security, adversarial ML, and related security fields through literature review, conference attendance, and community engagement.

Qualifications

Required

  • Bachelor's degree or equivalent practical experience.
  • 2 years of experience with security assessments, security design reviews, or threat modeling, including common attack vectors and mitigation principles.
  • 2 years of experience with security engineering, computer and network security, and security protocols.
  • 2 years of coding experience in one or more general-purpose languages.
  • 2 years of experience in security research, vulnerability analysis, pen testing, or a similar role, including analyzing systems, identifying security weaknesses, and thinking like an attacker.

Preferred

  • Master's or PhD degree in Computer Science or a related technical field with a specialization in Security, AI/ML, or a related area.
  • Experience in security research contributions (e.g., publications in relevant security/machine learning venues, CVEs (Common Vulnerabilities and Exposures), conference talks, open-source tools).
  • Experience in AI/ML security research, including areas like adversarial ML, prompt injection, model extraction, or privacy-preserving machine learning.
  • Experience developing or evaluating security controls for large-scale systems.
  • Experience in secure coding practices, vulnerability analysis, security architecture, and web security.
  • Familiarity with the architecture and potential failure modes of LLMs and AI agent systems.

About Google

Google is a subsidiary of Alphabet.
Glassdoor rating: 4.4
Founded in 1998
Headquarters: Mountain View, California, USA
10,001+ employees
https://www.google.com