The official website for the Responsibly Building Generative Models tutorial at ECCV 2024.
Date and Time: September 29, 2 PM, Suite 7
Zoom link: https://asu.zoom.us/j/81540604207?pwd=UdeUUq7B6MvEgNOnTKJuSrOAfSBzTF.1
Password: eccv2024
Hosted by Gowthami Somepalli (UMD), Changhoon Kim (ASU), Tejas Gokhale (UMBC), Kyle Min (Intel Labs), Yezhou Yang (ASU) and Tom Goldstein (UMD)
Over the past few years, generative models have evolved from simple research concepts to production-ready tools, dramatically reshaping the tech landscape. Their outstanding generative capabilities have gained traction in various sectors, such as entertainment, art, journalism, and education. However, a closer look reveals that these models face several reliability issues that can impact their widespread adoption. A primary concern is the models' ability to memorize training data, which can result in copyright breaches. Reliability concerns also encompass the models' occasional failure to follow prompts accurately, as well as inherent biases, misrepresentations, and hallucinations. Moreover, with increasing awareness, issues related to privacy and potential misuse underscore the urgent need to safeguard these models. To move forward responsibly, we must adopt solutions that address memorization, build robust evaluation systems, and deploy active fingerprinting. These measures will help monitor progress and ensure the responsible and effective use of image-generative models.
In this tutorial, we will focus on the issues discussed above, and attendees will have the opportunity to learn about the following topics:
| Duration | Topic | Presenter |
|---|---|---|
| 10 min | Evolving Landscape of Generative Models: Progress and Pitfalls | Gowthami Somepalli (Ph.D. Candidate, UMD), Changhoon Kim (Ph.D. Candidate, ASU) |
| 35+5 min | Instructing Generative Image Models on Fairness and Safety | Patrick Schramowski (Researcher, DFKI) |
| 35+5 min | Attribution and Fingerprinting of Image Generative Models | Changhoon Kim (Ph.D. Candidate, ASU) |
| 30+5 min | Challenges with Evaluation of Text-to-Image Models | Tejas Gokhale (Assistant Professor, UMBC) |
| 35+5 min | Interpretability and Responsibility in AI | David Bau (Assistant Professor, NEU) |
| 35+5 min | Battle Against Deepfakes using Authenticity | Ilke Demir (Sr. Staff Research Scientist, Intel Labs) |
| 30+5 min | Understanding Training Data Memorization in Diffusion Models and Ways to Mitigate It | Gowthami Somepalli (Ph.D. Candidate, UMD) |
This website will be updated closer to the event date.
The tutorial is supported by the University of Maryland Center for Machine Learning and the NSF Institute for Trustworthy AI in Law & Society.
Further support comes from Arizona State University through the NSF Robust Intelligence grant #2132724 and NSF SaTC project #2101052.