clip-based-nsfw-detector's Introduction

CLIP-based-NSFW-Detector

This 2-class NSFW detector is a lightweight AutoKeras model that takes CLIP ViT-L/14 embeddings as inputs. It estimates a value between 0 and 1 (1 = NSFW) and works well with embeddings from images.

DEMO-Colab: https://colab.research.google.com/drive/19Acr4grlk5oQws7BHTqNIK-80XGw2u8Z?usp=sharing

The training CLIP ViT-L/14 embeddings can be downloaded here: https://drive.google.com/file/d/1yenil0R4GqmTOFQ_GVw__x61ofZ-OBcS/view?usp=sharing (not fully manually annotated, so it cannot be used as a test set).

The (manually annotated) test set is available here: https://github.com/LAION-AI/CLIP-based-NSFW-Detector/blob/main/nsfw_testset.zip

An example of running inference on LAION-5B: https://github.com/rom1504/embedding-reader/blob/main/examples/inference_example.py
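
A minimal sketch of that pattern, assuming the embedding-reader package and a folder of precomputed CLIP ViT-L/14 image embeddings in npy format; the folder path and batch size are placeholders, and load_safety_model is the function shown further down:

# Hedged sketch: stream precomputed CLIP embeddings with embedding-reader and
# score them with the safety model. The embeddings folder path and batch size
# are placeholders; load_safety_model is defined in the example below.
from embedding_reader import EmbeddingReader

safety_model = load_safety_model("ViT-L/14")
reader = EmbeddingReader(embeddings_folder="path/to/img_emb", file_format="npy")

for embeddings, _meta in reader(batch_size=10**4, start=0, end=reader.count):
    nsfw_scores = safety_model.predict(embeddings.astype("float32"), batch_size=embeddings.shape[0])
    # scores close to 1 mean the corresponding image embedding is likely NSFW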

Example usage of the model:

import os
from functools import lru_cache

import numpy as np


@lru_cache(maxsize=None)
def load_safety_model(clip_model):
    """load the safety model"""
    import autokeras as ak  # pylint: disable=import-outside-toplevel
    from tensorflow.keras.models import load_model  # pylint: disable=import-outside-toplevel

    cache_folder = get_cache_folder(clip_model)  # helper returning the local cache folder for this model

    if clip_model == "ViT-L/14":
        model_dir = cache_folder + "/clip_autokeras_binary_nsfw"
        dim = 768
    elif clip_model == "ViT-B/32":
        model_dir = cache_folder + "/clip_autokeras_nsfw_b32"
        dim = 512
    else:
        raise ValueError("Unknown clip model")
    if not os.path.exists(model_dir):
        os.makedirs(cache_folder, exist_ok=True)

        from urllib.request import urlretrieve  # pylint: disable=import-outside-toplevel

        path_to_zip_file = cache_folder + "/clip_autokeras_binary_nsfw.zip"
        if clip_model == "ViT-L/14":
            url_model = "https://raw.githubusercontent.com/LAION-AI/CLIP-based-NSFW-Detector/main/clip_autokeras_binary_nsfw.zip"
        elif clip_model == "ViT-B/32":
            url_model = (
                "https://raw.githubusercontent.com/LAION-AI/CLIP-based-NSFW-Detector/main/clip_autokeras_nsfw_b32.zip"
            )
        else:
            raise ValueError("Unknown model {}".format(clip_model))  # pylint: disable=consider-using-f-string
        urlretrieve(url_model, path_to_zip_file)
        import zipfile  # pylint: disable=import-outside-toplevel

        with zipfile.ZipFile(path_to_zip_file, "r") as zip_ref:
            zip_ref.extractall(cache_folder)

    loaded_model = load_model(model_dir, custom_objects=ak.CUSTOM_OBJECTS)
    # warm the model up with a dummy batch so the first real call is fast
    loaded_model.predict(np.random.rand(10**3, dim).astype("float32"), batch_size=10**3)

    return loaded_model
    
    
safety_model = load_safety_model("ViT-L/14")
nsfw_values = safety_model.predict(embeddings, batch_size=embeddings.shape[0])
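
For reference, here is a hedged sketch of how the embeddings above can be computed from images with OpenAI's CLIP package; the image path is a placeholder, and L2-normalizing the embedding is an assumption that matches how LAION image embeddings are usually stored:

# Hedged sketch: compute a CLIP ViT-L/14 image embedding to feed into the detector.
# "example.jpg" is a placeholder; the L2-normalization step is an assumption.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-L/14", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    emb = clip_model.encode_image(image)
emb = emb / emb.norm(dim=-1, keepdim=True)
embeddings = emb.cpu().numpy().astype("float32")
# pass `embeddings` to safety_model.predict as shown above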

This code and model are released under the MIT license:

Copyright 2022, Christoph Schuhmann

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

clip-based-nsfw-detector's People

Contributors

christophschuhmann, nousr, rom1504


clip-based-nsfw-detector's Issues

Wrong definition of NSFW p values in readme?

The readme says 1 = NSFW, but based on some experiments with the Colab demo, that appears to be the opposite of how it's implemented: I get values close to 1 for clearly SFW images and close to 0 for clearly NSFW ones (well, nude, but that philosophical debate is for another day).

How do you determine the thresholds?

Every unsafe concept has a unique threshold. I assume these thresholds are learned from some labelled dataset rather than determined manually, right?
Thanks!

Annotations for the NSFW test set?

The README mentions that the manually annotated test set is here.
I took a look at the test set and it has only the image embeddings.
How do I find the labels of the images?

torch version

import torch
from torch import nn


class Normalization(nn.Module):
    def __init__(self, shape):
        super().__init__()
        # normalization statistics, expected to be overwritten by the converted checkpoint
        self.register_buffer('mean', torch.zeros(shape))
        self.register_buffer('variance', torch.ones(shape))

    def forward(self, x):
        return (x - self.mean) / self.variance.sqrt()
    

class NSFWModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.norm = Normalization([768])
        self.linear_1 = nn.Linear(768, 64)
        self.linear_2 = nn.Linear(64, 512)
        self.linear_3 = nn.Linear(512, 256)
        self.linear_4 = nn.Linear(256, 1)
        self.act = nn.ReLU()
        self.act_out = nn.Sigmoid()

    def forward(self, x):
        x = self.norm(x)
        x = self.act(self.linear_1(x))
        x = self.act(self.linear_2(x))
        x = self.act(self.linear_3(x))
        x = self.act_out(self.linear_4(x))
        return x

clip_autokeras_binary_nsfw.zip

conversion notebook:
port_nsfw_to_pytorch.zip

from @crowsonkb
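
A minimal usage sketch for this port, assuming the weights produced by the attached conversion notebook; the state-dict file name below is hypothetical:

# Hedged sketch: load the ported weights and score CLIP ViT-L/14 embeddings.
# "clip_nsfw_pytorch.pth" is a hypothetical file name; use whatever the
# conversion notebook actually produced.
import torch

model = NSFWModel()
model.load_state_dict(torch.load("clip_nsfw_pytorch.pth", map_location="cpu"))
model.eval()

with torch.no_grad():
    embeddings = torch.randn(4, 768)  # stand-in for real CLIP ViT-L/14 image embeddings
    scores = model(embeddings).squeeze(-1)  # values in [0, 1]; higher means more likely NSFW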

Colab demo notebook raises exception

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-2-969c26cf53fe> in <module>
     39     return loaded_model
     40 
---> 41 safety_model = load_safety_model()
     42 
     43 

2 frames
<ipython-input-2-969c26cf53fe> in load_safety_model(clip_model)
     35             zip_ref.extractall(cache_folder)
     36 
---> 37     loaded_model = load_model(model_dir, custom_objects=ak.CUSTOM_OBJECTS)
     38 
     39     return loaded_model

/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
     68             # To get the full stack trace, call:
     69             # `tf.debugging.disable_traceback_filtering()`
---> 70             raise e.with_traceback(filtered_tb) from None
     71         finally:
     72             del filtered_tb

/usr/local/lib/python3.8/dist-packages/keras/saving/legacy/serialization.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name)
    383     )
    384     if cls is None:
--> 385         raise ValueError(
    386             f"Unknown {printable_module_name}: '{class_name}'. "
    387             "Please ensure you are using a `keras.utils.custom_object_scope` "

ValueError: Unknown optimizer: 'Custom>AdamWeightDecay'. Please ensure you are using a `keras.utils.custom_object_scope` and that this object is included in the scope. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details.
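
One workaround that may help here, since the optimizer is only needed for further training, is to skip deserializing the training configuration; a hedged sketch, using the same cache path as the demo code above:

# Hedged sketch: compile=False skips restoring the training configuration
# (including the 'Custom>AdamWeightDecay' optimizer), which is enough for
# inference-only use. Pinning tensorflow/autokeras to the versions the model
# was exported with is another common fix.
from os.path import expanduser

import autokeras as ak
from tensorflow.keras.models import load_model

model_dir = expanduser("~") + "/.cache/clip_retrieval/clip_autokeras_binary_nsfw"
loaded_model = load_model(model_dir, custom_objects=ak.CUSTOM_OBJECTS, compile=False)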

Can't get the right results

Hi there,

When using this model, I fed it safe images as inputs but got the opposite results. The code I wrote is almost the same as yours.

import os
import zipfile
from os.path import expanduser
from urllib.request import urlretrieve

import autokeras as ak
import torch
from tensorflow.keras.models import load_model


class SafetyClassifier:
    def __init__(self, model_cache_dir=None):
        self.model = self.load_safety_model(cache_folder=model_cache_dir)

    def load_safety_model(self, cache_folder=None):
        if cache_folder is None:
            home = expanduser("~")
            cache_folder = home + "/.cache/clip_retrieval"
        model_dir = cache_folder + "/clip_autokeras_binary_nsfw"
        if not os.path.exists(model_dir):
            os.makedirs(cache_folder, exist_ok=True)
            path_to_zip_file = cache_folder + "/clip_autokeras_binary_nsfw.zip"
            url_model = (
                "https://raw.githubusercontent.com/LAION-AI/CLIP-based-NSFW-Detector/main/clip_autokeras_binary_nsfw.zip"
            )
            urlretrieve(url_model, path_to_zip_file)
            with zipfile.ZipFile(path_to_zip_file, "r") as zip_ref:
                zip_ref.extractall(cache_folder)

        loaded_model = load_model(model_dir, custom_objects=ak.CUSTOM_OBJECTS)
        # print(loaded_model.predict(np.random.rand(10**3, 768).astype("float32"), batch_size=10**3))
        return loaded_model
    
    def __call__(self, clip_embs):
        if isinstance(clip_embs, torch.Tensor):
            clip_embs = clip_embs.cpu().numpy()
        return self.model.predict_on_batch(clip_embs)

I encountered several warnings when running inference. My environment: autokeras==1.0.19 and tensorflow==2.9.1. Did I miss something?

add doc

  1. methodology for training set collection
  2. methodology for test set collection
  3. command to run the training
