
Responsibly Building Generative Models @ ECCV 2024 (Accepted)


The official website for the Responsibly Building Generative Models tutorial at ECCV 2024 (details TBD).

Hosted by Gowthami Somepalli (UMD), Changhoon Kim (ASU), Tejas Gokhale (UMBC), Kyle Min (Intel Labs), Yezhou Yang (ASU) and Tom Goldstein (UMD)

Agenda

Over the past few years, generative models have evolved from simple research concepts to production-ready tools, dramatically reshaping the tech landscape. Their outstanding generative capabilities have gained traction in various sectors, such as entertainment, art, journalism, and education. However, a closer look reveals that these models face several reliability issues that can impact their widespread adoption. A primary concern is the models' ability to memorize training data, which might result in copyright breaches. Reliability concerns also encompass these models' occasional failure to accurately follow prompts, as well as inherent biases, misrepresentations, and hallucinations. Moreover, with increasing awareness, issues related to privacy and potential misuse underscore the urgent need to safeguard these models. To move forward responsibly with these models, we must adopt mitigations for memorization, robust evaluation frameworks, and active fingerprinting techniques. These measures will help monitor progress and ensure the responsible and effective use of image-generative models.

In this tutorial, we will focus on the issues discussed above, and attendees will have the opportunity to learn about the topics outlined in the schedule below.

Tentative Schedule

| Duration | Topic | Presenter |
|---|---|---|
| 10 min | Evolving Landscape of Generative Models: Progress and Pitfalls | Gowthami Somepalli (Ph.D. Candidate, UMD) and Changhoon Kim (Ph.D. Candidate, ASU) |
| 30+5 min | Understanding training data memorization in diffusion models and ways to mitigate it | Gowthami Somepalli (Ph.D. Candidate, UMD) |
| 30+5 min | Erasing Concepts in Diffusion Models | David Bau (Assistant Professor, NEU) |
| 30+5 min | Instructing Generative Image Models on Fairness and Safety | Patrick Schramowski (Researcher, DFKI) |
| 30+5 min | Deepfake Detection | Ilke Demir (Sr. Staff Research Scientist, Intel Labs) |
| 40+5 min | Attribution and Fingerprinting of Image Generative Models | Changhoon Kim (Ph.D. Candidate, ASU) |
| 40+5 min | Challenges with Evaluation of Text-to-Image Models | Tejas Gokhale (Assistant Professor, UMBC) |

This website will be updated closer to the event date.

The tutorial is supported by the University of Maryland Center for Machine Learning and the NSF Institute for Trustworthy AI in Law & Society. Further support comes from Arizona State University through the NSF Robust Intelligence grant #2132724 and NSF SaTC project #2101052.