laion-ai / CLIP-based-NSFW-Detector (License: Other)
Hello,
Looking at the definition of the ViT-H-14 variant, the model has no sigmoid activation at the output. Are we supposed to add one, or just clamp the value between 0 and 1? As written, the output is unconstrained.
https://github.com/LAION-AI/CLIP-based-NSFW-Detector/blob/main/h14_nsfw_model.py#L24
The commit is from @nousr, so perhaps you might know?
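If the raw output is meant to be a logit, one option (my assumption, not something the repo confirms) is to squash it with a sigmoid rather than clamp, since clamping collapses everything outside [0, 1] to the endpoints while a sigmoid maps the whole real line smoothly into (0, 1). A minimal sketch with made-up values:

```python
import torch

# Hypothetical raw outputs ("logits") from the ViT-H-14 NSFW head, which has
# no output activation in h14_nsfw_model.py. These numbers are illustrative.
raw = torch.tensor([[-2.3], [0.0], [4.1]])

# Treat the raw values as logits and apply a sigmoid; each score then lies
# strictly in (0, 1), and a raw value of 0 maps to exactly 0.5.
scores = torch.sigmoid(raw)
```

Whether sigmoid matches how the head was trained (e.g. with a BCE-with-logits loss) is for the authors to confirm.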
import torch
from torch import nn


class Normalization(nn.Module):
    def __init__(self, shape):
        super().__init__()
        self.register_buffer('mean', torch.zeros(shape))
        self.register_buffer('variance', torch.ones(shape))

    def forward(self, x):
        return (x - self.mean) / self.variance.sqrt()


class NSFWModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.norm = Normalization([768])
        self.linear_1 = nn.Linear(768, 64)
        self.linear_2 = nn.Linear(64, 512)
        self.linear_3 = nn.Linear(512, 256)
        self.linear_4 = nn.Linear(256, 1)
        self.act = nn.ReLU()
        self.act_out = nn.Sigmoid()

    def forward(self, x):
        x = self.norm(x)
        x = self.act(self.linear_1(x))
        x = self.act(self.linear_2(x))
        x = self.act(self.linear_3(x))
        x = self.act_out(self.linear_4(x))
        return x
clip_autokeras_binary_nsfw.zip
conversion notebook: port_nsfw_to_pytorch.zip
(from @crowsonkb)
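For a quick smoke test of the port, here is a self-contained sketch (the model definition is repeated from the snippet above so it runs standalone; the weights are randomly initialized here, so the scores are meaningless — for real use, load the state dict produced by the conversion notebook):

```python
import torch
from torch import nn


class Normalization(nn.Module):
    def __init__(self, shape):
        super().__init__()
        self.register_buffer('mean', torch.zeros(shape))
        self.register_buffer('variance', torch.ones(shape))

    def forward(self, x):
        return (x - self.mean) / self.variance.sqrt()


class NSFWModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.norm = Normalization([768])
        self.linear_1 = nn.Linear(768, 64)
        self.linear_2 = nn.Linear(64, 512)
        self.linear_3 = nn.Linear(512, 256)
        self.linear_4 = nn.Linear(256, 1)
        self.act = nn.ReLU()
        self.act_out = nn.Sigmoid()

    def forward(self, x):
        x = self.norm(x)
        x = self.act(self.linear_1(x))
        x = self.act(self.linear_2(x))
        x = self.act(self.linear_3(x))
        return self.act_out(self.linear_4(x))


model = NSFWModel().eval()
# Stand-in for a batch of CLIP ViT-L/14 image embeddings (768-d); in real use
# these come from CLIP and the weights from the conversion notebook.
emb = torch.randn(4, 768)
with torch.no_grad():
    scores = model(emb)  # shape (4, 1); the output sigmoid keeps values in (0, 1)
```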
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-2-969c26cf53fe> in <module>
     39     return loaded_model
     40
---> 41 safety_model = load_safety_model()
     42
     43

2 frames
<ipython-input-2-969c26cf53fe> in load_safety_model(clip_model)
     35         zip_ref.extractall(cache_folder)
     36
---> 37     loaded_model = load_model(model_dir, custom_objects=ak.CUSTOM_OBJECTS)
     38
     39     return loaded_model

/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
     68             # To get the full stack trace, call:
     69             # `tf.debugging.disable_traceback_filtering()`
---> 70             raise e.with_traceback(filtered_tb) from None
     71         finally:
     72             del filtered_tb

/usr/local/lib/python3.8/dist-packages/keras/saving/legacy/serialization.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name)
    383             )
    384         if cls is None:
--> 385             raise ValueError(
    386                 f"Unknown {printable_module_name}: '{class_name}'. "
    387                 "Please ensure you are using a `keras.utils.custom_object_scope` "

ValueError: Unknown optimizer: 'Custom>AdamWeightDecay'. Please ensure you are using a `keras.utils.custom_object_scope` and that this object is included in the scope. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details.
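One workaround worth trying (this is general Keras behavior, not something specific to this repo): pass `compile=False` to `load_model`. That skips deserializing the training configuration — loss, metrics, and the custom `Custom>AdamWeightDecay` optimizer — which is all you need for pure inference. A sketch, with a hypothetical wrapper name:

```python
def load_safety_model_for_inference(model_dir, custom_objects=None):
    """Load the saved AutoKeras model without restoring its optimizer.

    compile=False tells Keras to skip the training configuration, so the
    unknown 'Custom>AdamWeightDecay' optimizer is never looked up.
    predict() / predict_on_batch() still work on the returned model.
    """
    # Lazy import so the function definition itself has no TF dependency.
    from tensorflow.keras.models import load_model
    return load_model(model_dir, custom_objects=custom_objects, compile=False)
```

You would still pass `custom_objects=ak.CUSTOM_OBJECTS` for the AutoKeras layers; only the optimizer lookup is avoided.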
Maybe this one? NudeNet Classifier Dataset
Hi there,
When using this model, I fed it safe images as inputs but got the opposite results. The code I wrote is almost the same as yours.
import os
import zipfile
from os.path import expanduser
from urllib.request import urlretrieve

import autokeras as ak
import torch
from tensorflow.keras.models import load_model


class SafetyClassifier:
    def __init__(self, model_cache_dir=None):
        self.model = self.load_safety_model(cache_folder=model_cache_dir)

    def load_safety_model(self, cache_folder=None):
        if cache_folder is None:
            home = expanduser("~")
            cache_folder = home + "/.cache/clip_retrieval"
        model_dir = cache_folder + "/clip_autokeras_binary_nsfw"
        if not os.path.exists(model_dir):
            os.makedirs(cache_folder, exist_ok=True)
            path_to_zip_file = cache_folder + "/clip_autokeras_binary_nsfw.zip"
            url_model = (
                "https://raw.githubusercontent.com/LAION-AI/CLIP-based-NSFW-Detector/main/clip_autokeras_binary_nsfw.zip"
            )
            urlretrieve(url_model, path_to_zip_file)
            with zipfile.ZipFile(path_to_zip_file, "r") as zip_ref:
                zip_ref.extractall(cache_folder)
        loaded_model = load_model(model_dir, custom_objects=ak.CUSTOM_OBJECTS)
        # print(loaded_model.predict(np.random.rand(10**3, 768).astype("float32"), batch_size=10**3))
        return loaded_model

    def __call__(self, clip_embs):
        if isinstance(clip_embs, torch.Tensor):
            clip_embs = clip_embs.cpu().numpy()
        return self.model.predict_on_batch(clip_embs)
I also encountered several warnings during inference. My environment: autokeras==1.0.19 and tensorflow==2.9.1. Did I miss something?
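One thing worth ruling out (an assumption about the failure mode, not a confirmed diagnosis): the embeddings fed to this detector are typically L2-normalized, as clip-retrieval produces them, so passing unnormalized CLIP features can distort or even flip the scores. A quick numpy sketch of the normalization step:

```python
import numpy as np


def l2_normalize(embs: np.ndarray) -> np.ndarray:
    """Scale each embedding to unit L2 norm (a no-op if already normalized)."""
    norms = np.linalg.norm(embs, axis=-1, keepdims=True)
    # Clip to avoid dividing by zero on an all-zero embedding.
    return embs / np.clip(norms, 1e-12, None)


# Stand-in for a batch of CLIP ViT-L/14 image embeddings.
embs = np.random.rand(4, 768).astype("float32")
normed = l2_normalize(embs)
```

If your scores change meaningfully after normalizing, that was likely the issue.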
What is the safety_settings.yml used for?
The README says 1 = NSFW, but based on some experiments with the Colab demo, that appears to be the opposite of how it's implemented: I get close to 1 for clearly SFW images and close to 0 for clearly NSFW (well, nude, but the philosophy is for another day) ones.
The README mentions that the manually annotated test set is here.
I took a look at the test set and it has only the image embeddings.
How do I find the labels of the images?
Every unsafe concept has a unique threshold. I assume these thresholds are learned from some labelled dataset rather than set manually, right?
Thanks!