Stable Diffusion ‘Memorizes’ Some Images, Sparking Privacy Concerns: Model Can Be Made to Regurgitate Training Images
On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining an adversarial attack that can extract a small percentage of training images from latent diffusion image synthesis models like Stable Diffusion. The finding challenges the view that image synthesis models do not memorize their training data.