Secure and Robust AI Model Development (DAT945)
The course is a mix of lectures and practical sessions in which students develop Trustworthy AI applications using the methods covered in the course.
Course description for study year 2024-2025
Course code
DAT945
Version
1
Credits (ECTS)
5
Semester tuition start
Spring, Autumn
Number of semesters
1
Exam semester
Spring, Autumn
Language of instruction
English
Content
In an era dominated by Artificial Intelligence (AI), the concept of Trustworthy AI stands at the forefront of our technological landscape. Ensuring that AI systems are not only robust but also reliable and secure is paramount.
One of the problems in AI-based applications is adversarial machine learning attacks, in which an attacker deliberately alters the inputs to a machine learning model to cause the model to make incorrect predictions. These attacks can be challenging to detect and have serious consequences, such as causing a self-driving car to misidentify a stop sign. In this course, students will learn about different types of adversarial attacks and how to defend against them.
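As a concrete illustration of how such an attack works, the sketch below applies the Fast Gradient Sign Method (FGSM), a standard adversarial attack, to a toy logistic-regression model. The model weights and inputs are invented for the example and are not taken from the course materials; the same idea carries over to deep networks, where the gradient is obtained by backpropagation.

```python
import numpy as np

# Toy logistic-regression "model" with fixed, invented weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict_proba(x):
    """Probability of the positive class under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_attack(x, y, eps):
    """FGSM: step in the sign of the loss gradient w.r.t. the input,
    bounded by eps in the L-infinity norm."""
    p = predict_proba(x)
    # Gradient of binary cross-entropy w.r.t. the input is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.5, -0.5, 1.0])
y = 1.0  # true label
x_adv = fgsm_attack(x, y, eps=0.3)
# The perturbed input lowers the model's confidence in the true class.
print(predict_proba(x), predict_proba(x_adv))
```

Defences studied in the course, such as adversarial training, reuse exactly this attack step to generate perturbed examples during training.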
Another major challenge in AI models is uncertainty at prediction time. Most AI models are based on probabilistic methods, which means that the model's output is not always deterministic. This can be a problem in applications where the AI model is used to make decisions, such as in a medical diagnosis. In this course, students will learn about methods for handling uncertainty in AI models.
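One common way to quantify predictive uncertainty is an ensemble: several models trained on resampled data disagree more on inputs the data does not support well, and the spread of their predictions serves as an uncertainty estimate. The sketch below fakes such an ensemble with randomly perturbed linear weights purely to illustrate the mechanics; in practice the members would come from bootstrap training runs or Monte Carlo dropout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of 10 linear models. In a real setting each
# weight vector would come from training on a bootstrap sample; here
# we simply draw them around a common mean to illustrate the idea.
weights = rng.normal(loc=[1.0, -2.0], scale=0.2, size=(10, 2))

def ensemble_predict(x):
    """Return the mean prediction and the standard deviation across
    ensemble members; the std is the uncertainty estimate."""
    preds = weights @ x
    return preds.mean(), preds.std()

mean, std = ensemble_predict(np.array([0.5, 1.0]))
print(f"prediction = {mean:.3f} +/- {std:.3f}")
```

Reporting the prediction together with its spread, rather than a bare point estimate, is the core habit this part of the course aims to build.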
The explainability and interpretability of machine learning models are challenging for many reasons. First, many machine learning models are complex and opaque, making it difficult to understand how they work. Second, even when a model is understandable, it is often difficult to explain its decisions to a non-technical user. In this course, students will learn about methods for making machine learning models more explainable and interpretable.
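One model-agnostic explainability technique is permutation feature importance: shuffle one feature at a time and measure how much the model's error grows. Features whose shuffling hurts the most matter the most. The sketch below demonstrates this on synthetic data with an ordinary least-squares fit standing in for an arbitrary black-box model; all data and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: y depends strongly on feature 0 and not on feature 1.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)

# A stand-in "black-box" model: a least-squares fit.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ coef

def permutation_importance(X, y, n_repeats=5):
    """Importance of feature j = average increase in MSE after
    randomly shuffling column j."""
    base = np.mean((predict(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            increases.append(np.mean((predict(Xp) - y) ** 2) - base)
        importances.append(np.mean(increases))
    return importances

imps = permutation_importance(X, y)
print(imps)  # feature 0 should dominate
```

Because the technique only needs a `predict` function, it applies unchanged to neural networks, gradient-boosted trees, or any other opaque model.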
Learning outcome
This course will explore methods for developing secure and robust AI-based applications. We will cover topics such as:
- Adversarial machine learning
- Uncertainty quantification and awareness
- Explainable artificial intelligence
- Interpretable machine learning
- Homomorphic Encryption
- Neural Cryptography
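To give a flavour of the last two topics: a scheme is homomorphic when an operation on ciphertexts corresponds to an operation on the underlying plaintexts, so a server can compute on data it cannot read. The toy sketch below shows only the additive homomorphic property using a one-time-pad-style construction; it is a teaching illustration and not a secure or practical scheme such as the lattice-based systems used in real homomorphic encryption.

```python
import secrets

N = 2**32  # toy modulus, chosen only for the demonstration

def encrypt(m, key):
    # Additive masking: ciphertexts can be summed without decryption.
    return (m + key) % N

def decrypt(c, key):
    return (c - key) % N

k1, k2 = secrets.randbelow(N), secrets.randbelow(N)
c1, c2 = encrypt(7, k1), encrypt(35, k2)

# Homomorphic property: the sum of two ciphertexts decrypts, under the
# sum of the keys, to the sum of the two plaintexts (7 + 35 = 42).
print(decrypt((c1 + c2) % N, (k1 + k2) % N))
```

Real homomorphic encryption schemes extend this idea to richer operations (and without the key holder needing to combine keys), which is what makes privacy-preserving inference on encrypted model inputs possible.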
Knowledge
After completing the course, the student should:
- Have an understanding of methods for developing secure and robust AI models
Skills
After completing the course, the student should:
- Be able to apply mitigation methods to protect the AI models in a production environment from different types of adversarial attacks
- Be able to evaluate the impact of different types of adversarial attacks on the performance of AI models
- Be able to quantify and visualize the uncertainty in AI predictions
- Be able to use methods for making AI predictions more interpretable
Required prerequisite knowledge
Recommended prerequisites
Exam
Form of assessment | Weight | Duration | Marks | Aid
---|---|---|---|---
Project report | 1/1 | | Passed / Not Passed | |
Two assignments (before and after the lectures) must be approved before the student can access the final project presentation. The assignments are carried out individually.