AI Security: Security in the Age of Artificial Intelligence Specialization

Build Secure AI Systems End-to-End. Learn to identify, prevent, and respond to AI-specific threats across the entire ML lifecycle.

Instructors: Reza Moradinezhad, Starweaver, Ritesh Vajariya

Included with Coursera Plus

Get in-depth knowledge of a subject
Intermediate level

Recommended experience

4 weeks to complete at 10 hours a week
Flexible schedule: learn at your own pace

What you'll learn

  • Secure AI systems using static analysis, threat modeling, and vulnerability assessment techniques

  • Implement production security controls including monitoring, incident response, and patch management

  • Conduct red-teaming exercises and build resilient defenses against AI-specific attack vectors

Details to know

Shareable certificate

Add to your LinkedIn profile

Taught in English
Recently updated: January 2026


Advance your subject-matter expertise

  • Learn in-demand skills from university and industry experts
  • Master a subject or tool with hands-on projects
  • Develop a deep understanding of key concepts
  • Earn a career certificate from Coursera

Specialization - 13 course series

What you'll learn

  • Configure Bandit, Semgrep, and PyLint to detect AI vulnerabilities: insecure model deserialization, hardcoded secrets, and unsafe system calls in ML code (a brief sketch follows this list).

  • Apply static analysis to fix AI vulnerabilities (pickle exploits, input validation, dependencies); create custom rules for AI security patterns.

  • Implement pip-audit, Safety, and Snyk for dependency scanning; assess AI libraries for vulnerabilities, license compliance, and supply chain security.
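
The tooling above can be exercised on a tiny example. The snippet below is a minimal illustration rather than course material: it shows the unsafe deserialization pattern Bandit reports (rule B301) next to one safer, data-only alternative; the file paths and function names are hypothetical.

    import json
    import pickle

    def load_model_unsafe(path):
        # Bandit reports pickle.load (rule B301): a crafted model file can
        # execute arbitrary code during deserialization.
        with open(path, "rb") as f:
            return pickle.load(f)

    def load_weights_safer(path):
        # Safer alternative: a data-only format such as JSON cannot run
        # attacker-controlled code when loaded.
        with open(path) as f:
            return json.load(f)

Running bandit -r your_project/ surfaces the first function, and pip-audit -r requirements.txt reports known-vulnerable dependencies in the same project.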

Skills you'll gain

Dependency Analysis, Vulnerability Scanning, Analysis, Supply Chain, AI Personalization, DevSecOps, AI Security, Threat Modeling, Vulnerability Assessments, Secure Coding, Continuous Integration, Open Source Technology, PyTorch (Machine Learning Library), Program Implementation, MLOps (Machine Learning Operations), Application Security

What you'll learn

  • Analyze and evaluate AI inference threat models, identifying attack vectors and vulnerabilities in machine learning systems.

  • Design and implement comprehensive security test cases for AI systems, including unit tests, integration tests, and adversarial robustness testing (see the sketch after this list).

  • Integrate AI security testing into CI/CD pipelines for continuous security validation and monitoring of production deployments.
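
As a concrete, hypothetical illustration of such a security test case, the pytest sketch below checks that an inference wrapper rejects malformed input; safe_predict and its limits are invented for the example.

    import pytest

    MAX_INPUT_LEN = 1024

    def safe_predict(text):
        # Input validation performed before the (omitted) model call.
        if not isinstance(text, str):
            raise TypeError("input must be a string")
        if len(text) > MAX_INPUT_LEN:
            raise ValueError("input exceeds maximum length")
        return "ok"  # placeholder for the real model output

    def test_rejects_oversized_input():
        with pytest.raises(ValueError):
            safe_predict("A" * (MAX_INPUT_LEN + 1))

    def test_rejects_non_string_input():
        with pytest.raises(TypeError):
            safe_predict(b"raw bytes")

Tests of this kind can run as a stage in the CI/CD pipeline described in the third bullet.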

Skills you'll gain

Security Testing, Threat Modeling, DevOps, Prompt Engineering, CI/CD, AI Security, Continuous Integration, MITRE ATT&CK Framework, Secure Coding, Integration Testing, DevSecOps, MLOps (Machine Learning Operations), System Monitoring, Test Case, Application Security, Threat Detection, Continuous Monitoring, Unit Testing, Scripting

What you'll learn

  • Analyze inference bottlenecks to identify optimization opportunities in production ML systems.

  • Implement model pruning techniques to reduce computational complexity while maintaining acceptable accuracy.

  • Apply quantization methods and benchmark trade-offs for secure and efficient model deployment (a short sketch follows this list).
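
For orientation, here is a minimal sketch of the pruning and dynamic quantization techniques named above, using standard PyTorch utilities; the toy model and the 30% pruning amount are arbitrary choices for the example, not course code.

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

    # Pruning: zero out the 30% smallest-magnitude weights in each Linear layer.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")  # make the pruning permanent

    # Dynamic quantization: store Linear weights as int8 to shrink the model
    # and speed up CPU inference; the accuracy trade-off must be benchmarked.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )
    print(quantized)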

Skills you'll gain

Process Optimization, Model Evaluation, Convolutional Neural Networks, Project Performance, Network Performance Management, Benchmarking, Model Deployment, Network Model, Keras (Neural Network Library), Cloud Deployment

What you'll learn

  • Apply infrastructure hardening in ML environments using secure setup, IAM controls, patching, and container scans to protect data.

  • Secure ML CI/CD workflows through automated dependency scanning, build validation, and code signing to prevent supply chain risks.

  • Design resilient ML pipelines by integrating rollback, drift monitoring, and adaptive recovery to maintain reliability and system trust.

Skills you'll gain

CI/CD, Identity and Access Management, Responsible AI, Vulnerability Scanning, Continuous Monitoring, Model Deployment, Hardening, DevSecOps, Containerization, Compliance Management, MLOps (Machine Learning Operations), Infrastructure Security, Model Evaluation, AI Personalization, Security Controls, Resilience, Threat Modeling, Vulnerability Assessments, Engineering, AI Security

What you'll learn

  • Execute secure deployment strategies (blue/green, canary, shadow) with traffic controls, health gates, and rollback plans.

  • Implement model registry governance (versioning, lineage, stage transitions, approvals) to enforce provenance and promote-to-prod workflows.

  • Design monitoring that triggers runbooks; secure updates via signing and CI/CD policy for auditable releases and controlled rollback.

Skills you'll gain

CI/CD, Artificial Intelligence and Machine Learning (AI/ML), Model Deployment, AI Security, Cloud Deployment, MLOps (Machine Learning Operations), Data-Driven Decision-Making, DevOps

What you'll learn

  • Analyze and identify a range of security vulnerabilities in complex AI models, including evasion, data poisoning, and model extraction attacks (one evasion attack is sketched after this list).

  • Apply defense mechanisms like adversarial training and differential privacy to protect AI systems from known threats.

  • Evaluate the effectiveness of security measures by designing and executing simulated adversarial attacks to test the resilience of defended AI models.
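
One evasion attack commonly used in such simulations is the fast gradient sign method (FGSM). The sketch below is a generic illustration rather than the course's code; it assumes a differentiable PyTorch classifier and inputs scaled to [0, 1].

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, epsilon=0.03):
        # Perturb the input in the direction of the loss gradient's sign.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Adversarial training, one of the defenses listed above, mixes such perturbed inputs into each training batch so the model learns to classify them correctly.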

Skills you'll gain

Vulnerability Assessments, Data Integrity, Threat Modeling, Generative Adversarial Networks (GANs), AI Security, Security Engineering, Model Evaluation, Security Strategy, Cyber Threat Hunting, Responsible AI, Information Privacy, Design, Analysis, Security Testing, Data Validation

What you'll learn

  • Analyze real-world AI security, privacy, and access control risks to understand how they manifest in your own organization.

  • Design technical controls and governance frameworks to secure AI systems, guided by free tools and industry guidelines.

  • Assess privacy laws' impact on AI, draft compliant policies, and tackle compliance challenges.

Skills you'll gain

Threat Modeling, Data Security, Data Governance, Information Privacy, Generative AI, AI Security, Incident Response, Data Loss Prevention, Risk Management Framework, Identity and Access Management, Personally Identifiable Information, Governance, Cyber Security Policies, Security Controls, Security Awareness, Responsible AI

What you'll learn

  • Design red-teaming scenarios to identify vulnerabilities and attack vectors in large language models using structured adversarial testing.

  • Implement content-safety filters to detect and mitigate harmful outputs while maintaining model performance and user experience (a toy filter is sketched after this list).

  • Evaluate and enhance LLM resilience by analyzing adversarial inputs and developing defense strategies to strengthen overall AI system security.
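
As a deliberately simplistic illustration of a content-safety filter (not how the course implements one), the sketch below withholds outputs that match a small deny-list of regular expressions; production filters typically combine classifiers, policy checks, and human review.

    import re

    DENY_PATTERNS = [
        re.compile(r"(?i)\bhow to (build|make) (a )?(bomb|weapon)\b"),
        re.compile(r"(?i)\b(credit card|social security)\s+numbers?\b"),
    ]

    def filter_output(text):
        # Withhold any model response that matches a deny-list pattern.
        for pattern in DENY_PATTERNS:
            if pattern.search(text):
                return "[response withheld by safety filter]"
        return text

    print(filter_output("Here is a recipe for banana bread."))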

Skills you'll gain

Large Language Modeling, Responsible AI, AI Personalization, System Implementation, Security Strategy, LLM Application, Cyber Security Assessment, AI Security, Security Controls, Prompt Engineering, Vulnerability Scanning, Continuous Monitoring, Threat Modeling, Vulnerability Assessments, Penetration Testing, Security Testing, Scenario Testing

What you'll learn

  • Identify and classify various classes of attacks targeting AI systems.

  • Analyze the AI/ML development lifecycle to pinpoint stages vulnerable to attack.

  • Apply threat mitigation strategies and security controls to protect AI systems in development and production.

Skills you'll gain

MLOps (Machine Learning Operations), Security Engineering, Cybersecurity, Application Lifecycle Management, Threat Detection, Security Controls, Data Security, Application Security, MITRE ATT&CK Framework, Model Deployment, Responsible AI, Threat Modeling, Artificial Intelligence and Machine Learning (AI/ML), AI Security, Vulnerability Assessments

What you'll learn

  • Apply machine learning techniques to detect anomalies in cybersecurity data such as logs, network traffic, and user behavior (illustrated in the sketch after this list).

  • Automate incident response workflows by integrating AI-driven alerts with security orchestration tools.

  • Evaluate and fine-tune AI models to reduce false positives and improve real-time threat detection accuracy.
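
A minimal sketch of the anomaly-detection idea, assuming scikit-learn and synthetic request-rate data invented for the example:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal = rng.normal(loc=100, scale=10, size=(500, 1))  # typical requests/min
    spikes = rng.normal(loc=400, scale=20, size=(5, 1))    # suspicious bursts
    traffic = np.vstack([normal, spikes])

    detector = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
    labels = detector.predict(traffic)  # -1 = anomaly, 1 = normal
    print("flagged points:", int((labels == -1).sum()))

Flagged points would then feed the alerting, orchestration, and tuning steps described in the other two bullets.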

Skills you'll gain

Process Optimization, Data Analysis, Time Series Analysis and Forecasting, Site Reliability Engineering, Generative AI, User Feedback, Application Performance Management, Data Integration, Anomaly Detection, Microsoft Azure, Scalability, Query Languages

What you'll learn

  • Apply systematic patching strategies to AI models, ML frameworks, and dependencies while maintaining service availability and model performance.

  • Conduct blameless post-mortems for AI incidents using structured frameworks to identify root causes, document lessons learned, and prevent recurrence.

  • Set up monitoring, alerts, and recovery to detect and resolve model drift, performance drops, and failures early (a drift-check sketch follows this list).
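
One simple way to flag the input drift mentioned above is a two-sample statistical test between a training baseline and recent production data. The sketch below is illustrative only: it uses SciPy's Kolmogorov-Smirnov test on synthetic values and an arbitrary alert threshold.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    baseline = rng.normal(0.0, 1.0, size=5000)  # feature values at training time
    live = rng.normal(0.4, 1.0, size=1000)      # recent production values (shifted)

    stat, p_value = ks_2samp(baseline, live)
    if p_value < 0.01:
        print(f"drift alert: KS={stat:.3f}, p={p_value:.2e}")  # page on-call / retrain
    else:
        print("no significant drift detected")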

Skills you'll gain

Artificial Intelligence, Continuous Monitoring, Problem Management, Sprint Retrospectives, Model Deployment, AI Security, System Monitoring, MLOps (Machine Learning Operations), Vulnerability Assessments, Disaster Recovery, DevOps, Dashboard, Patch Management, Dependency Analysis, Automation, Incident Management, Site Reliability Engineering

What you'll learn

  • Explain the fundamentals of deploying AI models on mobile applications, including their unique performance, privacy, and security considerations.

  • Analyze threats to mobile AI models, such as reverse engineering, adversarial attacks, and privacy leaks, and their effect on reliability and trust.

  • Design a layered defense strategy for securing mobile AI applications by integrating encryption, obfuscation, and continuous telemetry monitoring.

Skills you'll gain

Encryption, Mobile Security, Continuous Monitoring, Application Security, Security Management, Threat Management, Information Privacy, Program Implementation, AI Security, Security Requirements Analysis, Model Deployment, Apple iOS, Threat Modeling, Mobile Development, System Monitoring

What you'll learn

  • Analyze how AI features like sensors, models, and agents expand a phone's attack surface and enable deepfake-based scams.

  • Evaluate technical attack paths—zero-permission inference and multi-layer agent attacks—using real research cases.

  • Design a mobile-focused detection and response plan with simple rules, containment steps, and key resilience controls.

Skills you'll gain

Incident Response, Mobile Security, Hardening, Exploit development, Information Privacy, Deep Learning, Threat Detection, Threat Modeling, Artificial Intelligence, Security Controls, Mobile Development Tools, Prompt Engineering, AI Security, Endpoint Security

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.

Instructors

Reza Moradinezhad
Coursera
6 Courses, 3,991 learners

Starweaver
Coursera
501 Courses, 923,811 learners

Ritesh Vajariya
Coursera
23 Courses, 11,407 learners

Offered by

Coursera

