Coursera

AI Security: Security in the Age of Artificial Intelligence Specialization

Coursera

AI Security: Security in the Age of Artificial Intelligence Specialization

Build Secure AI Systems End-to-End. Learn to identify, prevent, and respond to AI-specific threats across the entire ML lifecycle.

Reza Moradinezhad
Starweaver
Ritesh Vajariya

Instructors: Reza Moradinezhad

Included with Coursera Plus

Get in-depth knowledge of a subject
Intermediate level

Recommended experience

4 weeks to complete
at 10 hours a week
Flexible schedule
Learn at your own pace
Get in-depth knowledge of a subject
Intermediate level

Recommended experience

4 weeks to complete
at 10 hours a week
Flexible schedule
Learn at your own pace

What you'll learn

  • Secure AI systems using static analysis, threat modeling, and vulnerability assessment techniques

  • Implement production security controls including monitoring, incident response, and patch management

  • Conduct red-teaming exercises and build resilient defenses against AI-specific attack vectors

Details to know

Shareable certificate

Add to your LinkedIn profile

Taught in English
Recently updated!

January 2026

See how employees at top companies are mastering in-demand skills

 logos of Petrobras, TATA, Danone, Capgemini, P&G and L'Oreal

Advance your subject-matter expertise

  • Learn in-demand skills from university and industry experts
  • Master a subject or tool with hands-on projects
  • Develop a deep understanding of key concepts
  • Earn a career certificate from Coursera

Specialization - 13 course series

What you'll learn

  • Configure Bandit, Semgrep, PyLint to detect AI vulnerabilities: insecure model deserialization, hardcoded secrets, unsafe system calls in ML code.

  • Apply static analysis to fix AI vulnerabilities (pickle exploits, input validation, dependencies); create custom rules for AI security patterns.

  • Implement pip-audit, Safety, Snyk for dependency scanning; assess AI libraries for vulnerabilities, license compliance, and supply chain security.

Skills you'll gain

Category: Dependency Analysis
Category: AI Security
Category: Vulnerability Scanning
Category: Responsible AI
Category: Open Web Application Security Project (OWASP)
Category: Continuous Integration
Category: Supply Chain
Category: Program Implementation
Category: PyTorch (Machine Learning Library)
Category: AI Personalization
Category: Secure Coding
Category: DevSecOps
Category: Threat Modeling
Category: Analysis
Category: Code Review
Category: MLOps (Machine Learning Operations)
Category: Application Security

What you'll learn

  • Analyze and evaluate AI inference threat models, identifying attack vectors and vulnerabilities in machine learning systems.

  • Design and implement comprehensive security test cases for AI systems including unit tests, integration tests, and adversarial robustness testing.

  • Integrate AI security testing into CI/CD pipelines for continuous security validation and monitoring of production deployments.

Skills you'll gain

Category: Threat Modeling
Category: AI Security
Category: Security Testing
Category: Prompt Engineering
Category: Test Case
Category: Integration Testing
Category: Unit Testing
Category: CI/CD
Category: Secure Coding
Category: DevSecOps
Category: MITRE ATT&CK Framework
Category: Threat Detection
Category: MLOps (Machine Learning Operations)
Category: Application Security
Category: System Monitoring
Category: DevOps
Category: Scripting
Category: Continuous Integration
Category: Continuous Monitoring

What you'll learn

  • Analyze inference bottlenecks to identify optimization opportunities in production ML systems.

  • Implement model pruning techniques to reduce computational complexity while maintaining acceptable accuracy.

  • Apply quantization methods and benchmark trade-offs for secure and efficient model deployment.

Skills you'll gain

Category: Model Deployment
Category: Cloud Deployment
Category: Model Evaluation
Category: Benchmarking
Category: Network Model
Category: Network Performance Management
Category: Process Optimization
Category: Convolutional Neural Networks
Category: Project Performance
Category: Keras (Neural Network Library)

What you'll learn

  • Apply infrastructure hardening in ML environments using secure setup, IAM controls, patching, and container scans to protect data.

  • Secure ML CI/CD workflows through automated dependency scanning, build validation, and code signing to prevent supply chain risks.

  • Design resilient ML pipelines by integrating rollback, drift monitoring, and adaptive recovery to maintain reliability and system trust.

Skills you'll gain

Category: AI Security
Category: CI/CD
Category: Compliance Management
Category: Infrastructure Security
Category: Identity and Access Management
Category: Resilience
Category: Model Deployment
Category: Continuous Monitoring
Category: Containerization
Category: Vulnerability Scanning
Category: Model Evaluation
Category: MLOps (Machine Learning Operations)
Category: DevSecOps
Category: Threat Modeling
Category: Security Controls
Category: Vulnerability Assessments
Category: AI Personalization
Category: Responsible AI
Category: Engineering
Category: Hardening

What you'll learn

  • Execute secure deployment strategies (blue/green, canary, shadow) with traffic controls, health gates, and rollback plans.

  • Implement model registry governance (versioning, lineage, stage transitions, approvals) to enforce provenance and promote-to-prod workflows.

  • Design monitoring triggering runbooks; secure updates via signing + CI/CD policy for auditable releases and controlled rollback.

Skills you'll gain

Category: AI Security
Category: Model Deployment
Category: Data-Driven Decision-Making
Category: DevOps
Category: Artificial Intelligence and Machine Learning (AI/ML)
Category: CI/CD
Category: Software Versioning
Category: MLOps (Machine Learning Operations)
Category: System Monitoring
Category: Cloud Deployment

What you'll learn

  • Analyze and identify a range of security vulnerabilities in complex AI models, including evasion, data poisoning, and model extraction attacks.

  • Apply defense mechanisms like adversarial training and differential privacy to protect AI systems from known threats.

  • Evaluate the effectiveness of security measures by designing and executing simulated adversarial attacks to test the resilience of defended AI model.

What you'll learn

  • Analyze real-world AI security, privacy, and access control risks to understand how these manifest in their own organizations.

  • Design technical controls and governance frameworks to secure AI systems, guided by free tools and industry guidelines.

  • Assess privacy laws' impact on AI, draft compliant policies, and tackle compliance challenges.

Skills you'll gain

Category: Responsible AI
Category: AI Security
Category: Data Security
Category: Information Privacy
Category: Cyber Security Policies
Category: Security Controls
Category: Cyber Governance
Category: Role-Based Access Control (RBAC)
Category: Authorization (Computing)
Category: Data Ethics
Category: Personally Identifiable Information
Category: Governance
Category: Security Awareness
Category: Threat Management
Category: Risk Management Framework
Category: Data Loss Prevention
Category: Incident Response
Category: Generative AI
Category: General Data Protection Regulation (GDPR)
Category: Security Management

What you'll learn

  • Design red-teaming scenarios to identify vulnerabilities and attack vectors in large language models using structured adversarial testing.

  • Implement content-safety filters to detect and mitigate harmful outputs while maintaining model performance and user experience.

  • Evaluate and enhance LLM resilience by analyzing adversarial inputs and developing defense strategies to strengthen overall AI system security.

Skills you'll gain

Category: Security Testing
Category: Large Language Modeling
Category: AI Security
Category: AI Personalization
Category: Threat Modeling
Category: Scenario Testing
Category: Prompt Engineering
Category: LLM Application
Category: Vulnerability Assessments
Category: System Implementation
Category: Penetration Testing
Category: Responsible AI
Category: Security Controls
Category: Cyber Security Assessment
Category: Continuous Monitoring
Category: Vulnerability Scanning
Category: Security Strategy

What you'll learn

  • Identify and classify various classes of attacks targeting AI systems.

  • Analyze the AI/ML development lifecycle to pinpoint stages vulnerable to attack.

  • Apply threat mitigation strategies and security controls to protect AI systems in development and production.

Skills you'll gain

Category: AI Security
Category: MITRE ATT&CK Framework
Category: Cybersecurity
Category: Responsible AI
Category: Threat Modeling
Category: Security Controls
Category: Data Security
Category: Application Lifecycle Management
Category: Artificial Intelligence and Machine Learning (AI/ML)
Category: Threat Detection
Category: Secure Coding
Category: Vulnerability Assessments
Category: Model Deployment
Category: Security Testing
Category: MLOps (Machine Learning Operations)

What you'll learn

  • Apply machine learning techniques to detect anomalies in cybersecurity data such as logs, network traffic, and user behavior.

  • Automate incident response workflows by integrating AI-driven alerts with security orchestration tools.

  • Evaluate and fine-tune AI models to reduce false positives and improve real-time threat detection accuracy.

Skills you'll gain

Category: Anomaly Detection
Category: Application Performance Management
Category: Microsoft Azure
Category: Process Optimization
Category: Time Series Analysis and Forecasting
Category: Event Monitoring
Category: Data Analysis
Category: Data Integration
Category: Query Languages
Category: User Feedback
Category: Generative AI

What you'll learn

  • Apply systematic patching strategies to AI models, ML frameworks, and dependencies while maintaining service availability and model performance.

  • Conduct blameless post-mortems for AI incidents using structured frameworks to identify root causes, document lessons learned, and prevent recurrence

  • Set up monitoring, alerts, and recovery to detect and resolve model drift, performance drops, and failures early.

Skills you'll gain

Category: Dependency Analysis
Category: Patch Management
Category: Application Deployment
Category: Incident Response
Category: Continuous Monitoring
Category: Computer Security Incident Management
Category: Automation
Category: Anomaly Detection
Category: Model Deployment
Category: System Monitoring
Category: Problem Management
Category: Responsible AI
Category: Incident Management
Category: Dashboard
Category: Site Reliability Engineering
Category: AI Security
Category: MLOps (Machine Learning Operations)
Category: Disaster Recovery

What you'll learn

  • Explain the fundamentals of deploying AI models on mobile applications, including their unique performance, privacy, and security considerations.

  • Analyze threats to mobile AI models like reverse engineering, adversarial attacks, and privacy leaks and their effect on reliability and trust.

  • Design a layered defense strategy for securing mobile AI applications by integrating encryption, obfuscation, and continuous telemetry monitoring.

Skills you'll gain

Category: Continuous Monitoring
Category: AI Security
Category: Encryption
Category: Threat Management
Category: Information Privacy
Category: System Monitoring
Category: Program Implementation
Category: Mobile Security
Category: Mobile Development
Category: Model Deployment
Category: Application Security
Category: Security Requirements Analysis
Category: Apple iOS
Category: Threat Modeling
Category: Security Management

What you'll learn

  • Analyze how AI features like sensors, models, and agents make phones attack surfaces and enable deepfake-based scams.

  • Evaluate technical attack paths—zero-permission inference and multi-layer agent attacks—using real research cases.

  • Design a mobile-focused detection and response plan with simple rules, containment steps, and key resilience controls.

Skills you'll gain

Category: Mobile Security
Category: Incident Response
Category: Artificial Intelligence
Category: Hardening
Category: Endpoint Security
Category: Threat Management
Category: AI Security
Category: Deep Learning
Category: Security Controls
Category: Threat Modeling
Category: Information Privacy
Category: Mobile Development Tools
Category: Threat Detection
Category: Exploit development

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.

Instructors

Reza Moradinezhad
Coursera
6 Courses 4,267 learners
Starweaver
Coursera
539 Courses 984,534 learners
Ritesh Vajariya
Coursera
27 Courses 14,848 learners

Offered by

Coursera

You might also like

Why people choose Coursera for their career

Felipe M.

Learner since 2018
"To be able to take courses at my own pace and rhythm has been an amazing experience. I can learn whenever it fits my schedule and mood."

Jennifer J.

Learner since 2020
"I directly applied the concepts and skills I learned from my courses to an exciting new project at work."

Larry W.

Learner since 2021
"When I need courses on topics that my university doesn't offer, Coursera is one of the best places to go."

Chaitanya A.

"Learning isn't just about being better at your job: it's so much more than that. Coursera allows me to learn without limits."
Coursera Plus

Open new doors with Coursera Plus

Unlimited access to 10,000+ world-class courses, hands-on projects, and job-ready certificate programs - all included in your subscription

Advance your career with an online degree

Earn a degree from world-class universities - 100% online

Join over 3,400 global companies that choose Coursera for Business

Upskill your employees to excel in the digital economy

Frequently asked questions