Description
Is your feature request related to a problem? Please describe.
See this PR comment: #815 (comment)
TL;DR: with the recent changes in #815, developers can disable the safety checker. Currently, the only option available to devs is to either keep the safety checker or remove it entirely. While this is useful, many applications of NSFW content require opt-in access from end users. For example, consider Reddit's NSFW model: the end user is shown an 'NSFW' overlay that they have to manually click through. The diffusers library does not currently make it easy to support such a use case.
Describe the solution you'd like
I think the best approach is to add a flag to the SafetyChecker class called black_out_images. This flag would then modify the if statement in this block:
for idx, has_nsfw_concept in enumerate(has_nsfw_concepts):
    if has_nsfw_concept and black_out_images:
        images[idx] = np.zeros(images[idx].shape)  # black image
The flag would then be passed into the SafetyChecker from the top-level pipeline config.
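A minimal sketch of what this could look like (SafetyCheckerSketch, its postprocess method, and the black_out_images flag are hypothetical stand-ins; the real StableDiffusionSafetyChecker also runs the CLIP-based concept detection that produces has_nsfw_concepts):

import numpy as np

class SafetyCheckerSketch:
    def __init__(self, black_out_images: bool = True):
        # Defaulting to True preserves today's behaviour: flagged images are blacked out.
        self.black_out_images = black_out_images

    def postprocess(self, images, has_nsfw_concepts):
        # Replace flagged images with black frames only when the flag is set;
        # callers always get has_nsfw_concepts back, so they can build an opt-in UI.
        for idx, has_nsfw_concept in enumerate(has_nsfw_concepts):
            if has_nsfw_concept and self.black_out_images:
                images[idx] = np.zeros(images[idx].shape)  # black image
        return images, has_nsfw_concepts

# An app that renders its own "click to reveal" overlay would opt out of blacking out:
checker = SafetyCheckerSketch(black_out_images=False)
images, flags = checker.postprocess([np.ones((64, 64, 3))], [True])
assert flags == [True] and images[0].max() == 1.0  # image untouched, flag still reported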
Describe alternatives you've considered
An alternative is to do this at the pipeline level. For example, we could pass a flag called black_out_nsfw_images into the Pipeline class. This flag would then modify the safety_checker call here:
safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device)
cleaned_image, has_nsfw_concept = self.safety_checker(
    images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
)
if black_out_nsfw_images:
    image = cleaned_image
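From the end user's side, the pipeline-level flag could then be exposed as a call argument. A hypothetical usage sketch (black_out_nsfw_images is the flag proposed here, not an existing StableDiffusionPipeline argument; the rest follows the current pipeline API):

from diffusers import StableDiffusionPipeline

def render_with_nsfw_overlay(img):
    # Placeholder for the app's Reddit-style "click to reveal" opt-in UI.
    img.show()

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# black_out_nsfw_images does not exist today; this call only works once the feature lands.
output = pipe("a photo of an astronaut riding a horse", black_out_nsfw_images=False)

image = output.images[0]
if output.nsfw_content_detected[0]:
    render_with_nsfw_overlay(image)  # raw image, gated by the app instead of blacked out
else:
    image.show()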
Additional context
In both cases, I believe the config can default to blacking out NSFW images. Having the option to turn that off is critical, however.
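For illustration, a minimal sketch of that default, assuming the flag ends up as a plain entry in the relevant config dict (the key name black_out_images and the helper are hypothetical):

# A missing key falls back to the safe default: black out flagged images.
def should_black_out(config: dict) -> bool:
    return config.get("black_out_images", True)

assert should_black_out({}) is True                            # default behaviour
assert should_black_out({"black_out_images": False}) is False  # explicit opt-out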