Add safety module #213


Merged: 13 commits into main from add-safety-module on Aug 19, 2022

Conversation

patil-suraj (Contributor) commented Aug 18, 2022

This PR adds StableDiffusionSafetyChecker to filter out NSFW content in StableDiffusionPipeline.

The StableDiffusionSafetyChecker contains the CLIP vision model and a vision projection layer. The NSFW concept embeddings are pre-computed and stored as part of the state_dict. The module takes generated images and, if NSFW content is detected, replaces the flagged image with a black image.

It's added as a new required attribute in StableDiffusionPipeline so it'll always be loaded.
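Conceptually, the checker does something like the following (a minimal sketch based on the description above; the function and argument names, the similarity measure, and the exact threshold are assumptions, not the actual implementation):

```python
import torch

# Conceptual sketch, not the actual implementation: embed images with the
# CLIP vision model and projection, compare against the pre-computed NSFW
# concept embeddings, and black out any flagged image.
def check_images(images, clip_input, vision_model, visual_projection, concept_embeds):
    pooled = vision_model(clip_input).pooler_output
    image_embeds = visual_projection(pooled)

    # Cosine similarity of each image embedding against each concept embedding.
    sims = torch.cosine_similarity(
        image_embeds[:, None, :], concept_embeds[None, :, :], dim=-1
    )
    has_nsfw_concept = (sims > 0.0).any(dim=-1)  # the 0.0 threshold is an assumption

    for i, flagged in enumerate(has_nsfw_concept):
        if flagged:
            images[i] = torch.zeros_like(images[i])  # replace with a black image
    return images, has_nsfw_concept
```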

HuggingFaceDocBuilderDev commented Aug 18, 2022

The documentation is not available anymore as the PR was closed or merged.

Comment on lines +70 to +71
```python
path = module.__module__.split(".")
is_pipeline_module = pipeline_dir in path and hasattr(pipelines, pipeline_dir)
```
patil-suraj (author) commented:

More general logic to detect whether a module comes from a pipeline module. For now this is only needed for the LDMBert model and the safety checker.

This should probably be in a separate PR.
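For context, the check could be packaged as a small helper (a hypothetical wrapper around the two lines above; the function name is invented for illustration):

```python
from diffusers import pipelines

# Hypothetical helper: a module counts as a pipeline module if its dotted
# import path contains the pipeline folder name and that folder exists
# under `diffusers.pipelines`.
def is_from_pipeline_module(module, pipeline_dir):
    # e.g. module.__module__ == "diffusers.pipelines.stable_diffusion.safety_checker"
    path = module.__module__.split(".")
    return pipeline_dir in path and hasattr(pipelines, pipeline_dir)
```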

Comment on lines +153 to +155
```python
safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(torch_device)
image, has_nsfw_concept = self.safety_checker(images=image, clip_input=safety_checker_input.pixel_values)
```

patil-suraj (author) commented:

safety_checker will replace NSFW images (if detected) in image with a black image.
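For reference, numpy_to_pil converts the pipeline's numpy output into PIL images for the feature extractor; a minimal sketch of such a helper (the actual implementation may differ):

```python
import numpy as np
from PIL import Image

# Sketch of a numpy -> PIL conversion helper like the one called above:
# takes a (batch, height, width, 3) float array with values in [0, 1].
def numpy_to_pil(images):
    images = (images * 255).round().astype("uint8")
    return [Image.fromarray(img) for img in images]
```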

patrickvonplaten (Contributor) left a comment:

Great! Looks very nice to me!

pcuenca (Member) commented Aug 19, 2022

LGTM! I'll test it when I'm added to the fusing org.

patrickvonplaten (Contributor) commented:

> LGTM! I'll test it when I'm added to the fusing org.

Added you :-)

pcuenca (Member) commented Aug 19, 2022

Looks great! Just a couple of quick comments / questions:

  • It gives false positives for "A painting of a squirrel eating a burger". I tried a loop with random seeds (set using torch.manual_seed before invoking the pipeline; see the sketch after this list) and it consistently said that NSFW content was detected. Perhaps there's some concept in the "be careful" list that's driving this result?
  • [Not important for release] How does it impact GPU memory requirements? It has been broadly announced that the model runs on GPUs with at least 10 GB of memory. I tested on an 11 GB card and it works.
  • [Not important] Would it be possible to exclude the safety checker module from the list of modules, so printing the pipeline doesn't reveal it? Probably more trouble than it's worth.
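A minimal sketch of the seed loop mentioned in the first point (the model id and the `.images` output access are assumptions based on later diffusers usage, not necessarily the API at the time of this PR):

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical repro for the false-positive report: same prompt, varying
# seeds set via torch.manual_seed before each pipeline invocation.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("cuda")

prompt = "A painting of a squirrel eating a burger"
for seed in range(8):
    torch.manual_seed(seed)  # set the seed before invoking the pipeline
    image = pipe(prompt).images[0]
    image.save(f"squirrel_{seed}.png")  # flagged images come back all black
```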

patrickvonplaten (Contributor) commented:

Regarding the false positives, that's not great indeed. @rromb @ablattmann @pesser, have you noticed this as well? We should definitely add to the error message that if it's a false positive, the prompt should be re-run with a different seed.

anton-l (Member) left a comment:

LGTM! Our tests are all based on v1-1, but I've checked manually with the safety checkpoint.

One nit: maybe let's add a logger.warning() about detected NSFW content as well, so that it's also visible in a terminal/notebook.

The "rerun with a different seed" message can be included in the warning as well, and added to the gradio demo too.

@patil-suraj patil-suraj changed the title from "[WIP] Add safety module" to "Add safety module" Aug 19, 2022
@patil-suraj patil-suraj merged commit 65ea7d6 into main Aug 19, 2022
@patil-suraj patil-suraj deleted the add-safety-module branch August 19, 2022 09:54
BIGJUN777 commented:
Hi there. Thank you for releasing the model.
When I tested it on some images, it turned out that the model tended to output wrong results even though the input images were normal. There are some sample images below. Do you have any idea about this? Thanks.
[Three sample images attached.]

patrickvonplaten (Contributor) commented:

Hey @BIGJUN777,

Could you please open a new issue for this?

patrickvonplaten (Contributor) commented:

It would be nice to have a reproducible code snippet as well.
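For example, something along these lines could serve as a starting point (a hypothetical sketch: the checkpoint id, the import path, and the input handling are assumptions; the call signature follows the snippet discussed above):

```python
import numpy as np
from PIL import Image
from transformers import CLIPFeatureExtractor
from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker

# Hypothetical standalone repro: run the safety checker directly on a
# local image file. The checkpoint id below is an assumption.
model_id = "CompVis/stable-diffusion-safety-checker"
feature_extractor = CLIPFeatureExtractor.from_pretrained(model_id)
safety_checker = StableDiffusionSafetyChecker.from_pretrained(model_id)

pil_image = Image.open("sample.png").convert("RGB")
np_image = np.array(pil_image, dtype=np.float32) / 255.0

clip_input = feature_extractor([pil_image], return_tensors="pt")
checked_images, has_nsfw = safety_checker(
    images=np_image[None], clip_input=clip_input.pixel_values
)
print(has_nsfw)  # [True] would reproduce the reported false positive
```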

PhaneeshB pushed a commit to nod-ai/diffusers that referenced this pull request Mar 1, 2023
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
* add SafetyChecker
* better name, fix checker
* add checker in main init
* remove from main init
* update logic to detect pipeline module
* style
* handle all safety logic in safety checker
* draw text
* can't draw
* small fixes
* treat special care as nsfw
* remove commented lines
* update safety checker