Protect AI systems from evolving threats with hands-on, instructor-led training in AI Security.
These live courses teach how to defend machine learning models, counter adversarial attacks, and build trustworthy, resilient AI systems.
Training is available as online live training via remote desktop or onsite live training in opolskie, featuring interactive exercises and real-world use cases.
Onsite live training can be delivered at your location in opolskie or at a NobleProg corporate training center in opolskie.
Also known as Secure AI, ML Security, or Adversarial Machine Learning.
NobleProg – Your Local Training Provider
Opole
NobleProg classroom, Władysława Reymonta 29, Opole, poland, 46-020
NobleProg classroom in Opole is located at Władysława Reymonta 29 Street.
This instructor-led, live training in opolskie (online or onsite) is aimed at intermediate-level engineers and security professionals who wish to secure AI models deployed at the edge against threats such as tampering, data leakage, adversarial inputs, and physical attacks.By the end of this training, participants will be able to:
Identify and assess security risks in edge AI deployments.
Apply tamper resistance and encrypted inference techniques.
Harden edge-deployed models and secure data pipelines.
Implement threat mitigation strategies specific to embedded and constrained systems.
This instructor-led, live training in opolskie (online or onsite) is aimed at advanced-level professionals who wish to implement and evaluate techniques such as federated learning, secure multiparty computation, homomorphic encryption, and differential privacy in real-world machine learning pipelines.By the end of this training, participants will be able to:
Understand and compare key privacy-preserving techniques in ML.
Implement federated learning systems using open-source frameworks.
Apply differential privacy for safe data sharing and model training.
Use encryption and secure computation techniques to protect model inputs and outputs.
Artificial Intelligence (AI) introduces new dimensions of operational risk, governance challenges, and cybersecurity exposure for government agencies and departments.This instructor-led, live training (online or onsite) is aimed at public sector IT and risk professionals with limited prior experience in AI who wish to understand how to evaluate, monitor, and secure AI systems within a government or regulatory context.By the end of this training, participants will be able to:
Interpret key risk concepts related to AI systems, including bias, unpredictability, and model drift.
Apply AI-specific governance and auditing frameworks such as NIST AI RMF and ISO/IEC 42001.
Recognize cybersecurity threats targeting AI models and data pipelines.
Establish cross-departmental risk management plans and policy alignment for AI deployment.
Format of the Course
Interactive lecture and discussion of public sector use cases.
AI governance framework exercises and policy mapping.
Scenario-based threat modeling and risk evaluation.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training in opolskie (online or onsite) is aimed at advanced-level security professionals and ML specialists who wish to simulate attacks on AI systems, uncover vulnerabilities, and enhance the robustness of deployed AI models.By the end of this training, participants will be able to:
Simulate real-world threats to machine learning models.
Generate adversarial examples to test model robustness.
Assess the attack surface of AI APIs and pipelines.
Design red teaming strategies for AI deployment environments.
This instructor-led, live training in opolskie (online or onsite) is aimed at intermediate-level enterprise leaders who wish to understand how to govern and secure AI systems responsibly and in compliance with emerging global frameworks such as the EU AI Act, GDPR, ISO/IEC 42001, and the U.S. Executive Order on AI.By the end of this training, participants will be able to:
Understand the legal, ethical, and regulatory risks of using AI across departments.
Interpret and apply major AI governance frameworks (EU AI Act, NIST AI RMF, ISO/IEC 42001).
Establish security, auditing, and oversight policies for AI deployment in the enterprise.
Develop procurement and usage guidelines for third-party and in-house AI systems.
This instructor-led, live training in opolskie (online or onsite) is aimed at intermediate-level to advanced-level AI developers, architects, and product managers who wish to identify and mitigate risks associated with LLM-powered applications, including prompt injection, data leakage, and unfiltered output, while incorporating security controls like input validation, human-in-the-loop oversight, and output guardrails.By the end of this training, participants will be able to:
Understand the core vulnerabilities of LLM-based systems.
Apply secure design principles to LLM app architecture.
Use tools such as Guardrails AI and LangChain for validation, filtering, and safety.
Integrate techniques like sandboxing, red teaming, and human-in-the-loop review into production-grade pipelines.
This instructor-led, live training in opolskie (online or onsite) is aimed at intermediate-level machine learning and cybersecurity professionals who wish to understand and mitigate emerging threats against AI models, using both conceptual frameworks and hands-on defenses like robust training and differential privacy.By the end of this training, participants will be able to:
Identify and classify AI-specific threats such as adversarial attacks, inversion, and poisoning.
Use tools like the Adversarial Robustness Toolbox (ART) to simulate attacks and test models.
Apply practical defenses including adversarial training, noise injection, and privacy-preserving techniques.
Design threat-aware model evaluation strategies in production environments.
This instructor-led, live training in opolskie (online or onsite) is aimed at beginner-level IT security, risk, and compliance professionals who wish to understand foundational AI security concepts, threat vectors, and global frameworks such as NIST AI RMF and ISO/IEC 42001.By the end of this training, participants will be able to:
Understand the unique security risks introduced by AI systems.
Identify threat vectors such as adversarial attacks, data poisoning, and model inversion.
Apply foundational governance models like the NIST AI Risk Management Framework.
Align AI use with emerging standards, compliance guidelines, and ethical principles.
Online Secure AI training in opolskie, Secure AI training courses in opolskie, Weekend Secure AI courses in opolskie, Evening Secure AI training in opolskie, AI Security instructor-led in opolskie, Secure AI on-site in opolskie, Online AI Security training in opolskie, Secure AI instructor-led in opolskie, Evening Secure AI courses in opolskie, Secure AI instructor in opolskie, Secure AI trainer in opolskie, Secure AI private courses in opolskie, AI Security boot camp in opolskie, AI Security coaching in opolskie, Weekend Secure AI training in opolskie, AI Security classes in opolskie, Secure AI one on one training in opolskie