
AI Security Engineer

Applied Materials

$108K–149K/yr · Santa Clara, California, United States · Full-time · 10+ years · On-site
Posted 1 day ago · America/Chicago · Austin, Texas, United States

About This Role

Who We Are

Applied Materials is a global leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. We design, build and service cutting-edge equipment that helps our customers manufacture display and semiconductor chips – the brains of devices we use every day. As the foundation of the global electronics industry, Applied enables the exciting technologies that literally connect our world – like AI and IoT. If you want to push the boundaries of materials science and engineering to create next-generation technology, join us to deliver material innovation that changes the world.

What We Offer

Salary: $108,000.00 – $148,500.00
Location: Austin, TX; Santa Clara, CA

You’ll benefit from a supportive work culture that encourages you to learn, develop, and grow your career as you take on challenges and drive innovative solutions for our customers. We empower our team to push the boundaries of what is possible while learning every day in a supportive, leading global company. Visit our Careers website to learn more.

At Applied Materials, we care about the health and wellbeing of our employees. We’re committed to providing programs and support that encourage personal and professional growth and care for you at work, at home, or wherever you may go. Learn more about our benefits.

Role Summary

The AI Security Engineer is responsible for securing the enablement and use of AI, GenAI, LLM, and agentic technologies across the enterprise, balancing business velocity with protection of Applied Materials’ intellectual property, sensitive data, and customer trust. This role drives AI security governance, risk management, technical guardrails, and operational oversight for AI systems and AI-integrated applications across the full lifecycle, from intake and design through deployment, monitoring, and incident response.
The role serves as a key focal point for AI security execution in the US and partners closely with global counterparts and cross-pillar security teams to deliver scalable, measurable, and auditable AI security controls.

Key Responsibilities

Technical Mindset & Operating Style
- Highly technology-savvy and continuously current on rapidly evolving AI/LLM platforms, agent frameworks, developer tooling, and emerging attack techniques through hands-on experimentation and learning
- Brings strong engineering intuition from prior software development experience or an equivalent hands-on technical background, enabling effective architecture reviews, threat modeling, and pragmatic security guidance
- Comfortable reading, writing, and reviewing code (e.g., Python, TypeScript, or similar) to understand AI workflows, model integrations, APIs, pipelines, and real-world failure modes
- Practical experience experimenting with AI tooling, copilots, agents, and “vibe-coding” workflows, with an understanding of how developers prototype, iterate, and ship AI-enabled systems
- Able to translate modern developer behaviors (prompt-driven development, agent orchestration, rapid iteration) into realistic, enforceable security controls rather than theoretical policy
- Uses technical credibility to influence engineering teams, accelerate adoption of secure AI patterns, and ensure security enables, rather than blocks, innovation
AI Security Governance & Intake
- Own enterprise AI discovery, inventory, and intake workflows covering AI use cases, models, tools, agents, and integrations
- Define and enforce AI risk tiering and classification (data sensitivity, model risk, autonomy level, exposure)
- Partner with AI Governance, Legal, Privacy, and Risk teams to establish approval, exception, and waiver processes
- Ensure AI security controls align with enterprise risk management and audit expectations

AI Threat Modeling & Risk Management
- Lead AI-specific threat modeling, including prompt injection, data leakage, model poisoning, tool abuse, agentic risk, and supply-chain threats
- Define secure AI architecture patterns and prohibited design patterns
- Conduct and oversee risk assessments for LLM-integrated applications, internal copilots, and external AI services
- Track AI security risks and exceptions through remediation and closure

Technical Controls & Guardrails
- Define and operationalize AI security guardrails, including:
  - Authentication and authorization for AI systems
  - Data boundaries, retention, and usage controls
  - Output/content controls and policy enforcement
  - Identity, secrets, and key management for AI workloads
- Lead security requirements for agent frameworks, MCP servers/clients, AI gateways, and proxies
- Partner with AppSec and Platform teams to deliver secure “paved-road” AI solutions for engineering teams

Secure AI Lifecycle, Testing & Monitoring
- Establish secure AI lifecycle gates (pre-prod, prod, post-deployment)
- Own AI security testing and validation, including red teaming, abuse testing, and guardrail effectiveness
- Define requirements for telemetry, audit logging, and retention for AI sessions, tool calls, and memory usage
- Integrate AI signals into SIEM, detection, and incident response workflows

Incident Response & Continuous Improvement
- Own AI-specific detection use cases and alerting strategies
- Partner with IR teams to develop and maintain AI incident response posture and integration with SIEM tools
- Lead post-incident reviews and drive control improvements
- Publish executive and operational AI security metrics and dashboards

Required Qualifications
- 10+ years in security architecture, application security, cloud/platform security, or related fields
- Demonstrated experience securing AI/ML or LLM-based systems in enterprise environments
- Strong background in threat modeling, secure design, and risk management
- Experience working cross-functionally with engineering, product, legal, and compliance teams
- Strong written and verbal communication skills, including executive-level communication

Preferred Qualifications
- Prior experience as a software engineer, platform engineer, or security engineer with significant coding responsibilities
- Experience with AI governance frameworks or enterprise risk management programs
- Familiarity with security testing, red teaming, and detection engineering
- Experience building security programs with clear KPIs, metrics, and audit readiness

Additional Information
Time Type: Full time
Employee Type: Assignee / Regular
Travel: No
Relocation Eligible: No

The salary offered to a selected candidate will be based on multiple factors, including location, hire grade, job-related knowledge, skills, experience, and consideration of internal equity among our current team members. In addition to a comprehensive benefits package, candidates may be eligible for other forms of compensation, such as participation in a bonus and a stock award program, as applicable. For all sales roles, the posted salary range is the Target Total Cash (TTC) range for the role, which is the sum of base salary and target bonus amount at 100% goal achievement.

Applied Materials is an Equal Opportunity Employer.
Qualified applicants will receive consideration for employment without regard to race, color, national origin, citizenship, ancestry, religion, creed, sex, sexual orientation, gender identity, age, disability, veteran or military status, or any other basis prohibited by law. In addition, Applied endeavors to make our careers site accessible to all users. If you would like to contact us regarding accessibility of our website or need assistance completing the application process, please contact us via e-mail at Accommodations_Program@amat.com, or by calling our HR Direct Help Line at 877-612-7547, option 1, and following the prompts to speak to an HR Advisor. This contact is for accommodation requests only and cannot be used to inquire about the status of applications.

Responsibilities

The AI Security Engineer is tasked with securing the enablement and use of AI, GenAI, LLM, and agentic technologies across the enterprise, balancing business needs with the protection of intellectual property and sensitive data. This role drives AI security governance, risk management, technical guardrails, and operational oversight for AI systems throughout their entire lifecycle.

Requirements

Candidates must have 10+ years in security architecture, application security, or related fields, with demonstrated experience securing AI/ML or LLM-based systems in enterprise settings. A strong background in threat modeling, secure design, risk management, and cross-functional collaboration with engineering and legal teams is required.

Benefits

Health Insurance · Bonus · Stock Award Program

Skills & Tags

AI Security · GenAI · LLM · Risk Management · Threat Modeling · Python · TypeScript · Architecture Reviews · Security Controls · Incident Response · Agent Frameworks · Security Governance · Data Security · Secure Lifecycle · Red Teaming · SIEM Integration

Keywords

AI · Artificial Intelligence · Generative AI · LLM · Agentic Technologies · Security Governance · Risk Management · Intellectual Property · Data Protection · Threat Modeling · Prompt Injection · Model Poisoning · Supply Chain Threats · Python · TypeScript · Software Development · Architecture Reviews · Security Controls · AppSec · Platform Security · Red Teaming · Incident Response · SIEM · Audit Readiness · Materials Engineering · Semiconductor Chips

Categories

Security & Safety · Technology · Software · Engineering · Data & Analytics

Source: eightfold