Diffusion Models for Image Generation

“Draw an image of an ape riding a horse.” A year ago, one had to commission a custom request like this from an artist. Nowadays, deep learning makes it possible to generate arbitrary images from text input. A major component of such systems is the image generation process, which is driven by diffusion models. In this seminar, we will start from the mathematical foundations, compare diffusion models to other concepts such as GANs, and explore their usage in papers like DALL-E 2.
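
To give a flavor of the material, here is a minimal sketch (in Python with PyTorch) of the forward noising process used in denoising diffusion probabilistic models. The linear beta schedule and tensor shapes below are illustrative assumptions, not part of the course material:

import torch

# Minimal sketch of the forward (noising) process in a denoising diffusion
# probabilistic model (DDPM), assuming a linear beta schedule.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)      # noise schedule beta_1 .. beta_T
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative products of the alphas

def noise_image(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    eps = torch.randn_like(x0)
    return alpha_bars[t].sqrt() * x0 + (1.0 - alpha_bars[t]).sqrt() * eps

# Example usage: x0 is an image tensor scaled to [-1, 1]; larger t means more noise.
x0 = torch.rand(3, 64, 64) * 2 - 1
x_noisy = noise_image(x0, t=500)

Generation then works in the opposite direction: a learned model gradually denoises a sample of pure noise into an image, which is the part the seminar papers focus on.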

General information

Date: Tuesdays (14:15-15:45)

Location: 01.06.011

Lecturers: Prof. Dr. Laura Leal-Taixé and Andreas Roessler.

ECTS: 5

SWS: 2

Prerequisites

Preliminary information/pre-matching meeting

Please check out our pre-matching meeting presentation for more information about the organization of this course and its prerequisites before signing up for this class.

Course matching

Students are supposed to both

After the final matching is announced, we will send an email to all participants with further information. We will have space for up to 12 students.

Forum

We will use Moodle for discussions and organizational topics. All participants will be signed up after the matching phase is over. External students, please reach out to Andreas so we can figure something out.

Papers

We will propose a list of important papers leading up to Stable Diffusion in the first seminar. In addition, students are encouraged to propose papers in this domain that they are interested in, and we will match those early in week 2.

Schedule

This part will be updated once student papers are assigned in week 2.


People