
spectralnormalizationkeras's Introduction

方宜晟 / I-Sheng "Ethan" Fang/ Gî-Tshiânn Png

("I-Sheng" is pronounced like "Ethan")

[portfolio] [github] [facebook] [instagram] [linkedin] [curriculum vitae] [google scholar]


Description

I am a Research Assistant at CITI, Academia Sinica, working with Dr. Jun-Cheng Chen. Before that, I received my master's degree in Robotics from National Yang Ming Chiao Tung University (the merger of National Chiao Tung University and National Yang Ming University) in January 2023. My advisors were Prof. Yong-Sheng Chen and Prof. Wei-Chen (Walon) Chiu. I was a graduate student in the Department of Computer Science at National Chengchi University, Taiwan, working with Prof. Yan-Tsung Peng. I was a research assistant at the Enriched Vision Applications Lab, National Chiao Tung University, from September 2018 to September 2019, working with Prof. Wei-Chen (Walon) Chiu. I received my bachelor's degree in Mathematical Sciences from National Chengchi University in January 2018.

My research interests are in the areas of generative models, self-/weakly-supervised learning, depth estimation, style transfer, computer vision, and deep learning. I am also interested in their creative applications, such as East Asian ideograph font design and deepfakes for education. I believe AI is leverage, augmenting human ability not only in monotonous tasks but also in content creation.

My personal interests are typography, film photography (check out my Instagram @ishengfang), baseball, and strength and conditioning training.

Publications

  • ES³Net: Accurate and Efficient Edge-Based Self-Supervised Stereo Matching Network

  • Single Image Reflection Removal based on Knowledge-distilling Content Disentanglement

    • Yan-Tsung Peng, Kai-Han Cheng, I-Sheng Fang, Wen-Yi Peng, Jr-Shian Wu
    • IEEE Signal Processing Letters (SPL), Feb. 2022
    • [IEEE Xplore][github]
  • Self-Contained Stylization via Steganography for Reverse and Serial Style Transfer

Projects

Contact

email(personal): [email protected], [email protected]

email(NCTU): [email protected]

email(NYCU): [email protected]

email(Academia Sinica): [email protected]

spectralnormalizationkeras's People

Contributors

ishengfang

spectralnormalizationkeras's Issues

AttributeError: 'ConvSN2DTranspose' object has no attribute 'output_padding'

I imported your ConvSN2DTranspose from SpectralNormalizationKeras into my code,
but I hit this error at line 609 (if self.output_padding is None:):
AttributeError: 'ConvSN2DTranspose' object has no attribute 'output_padding'
My Keras version is 2.2.0.
Have you encountered this problem? Is it a Keras version issue?
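This does look like a Keras version mismatch: newer Keras releases added an `output_padding` attribute that `Conv2DTranspose.call` reads, so a subclass whose `__init__` was written against an older base class never sets it. The following is a minimal pure-Python sketch of the failure mode (the class names stand in for the real Keras classes, they are not the repo's actual code), plus a possible stopgap of setting the attribute by hand:

```python
# Hypothetical sketch of the version mismatch: a subclass written against an
# older base-class API never initialises an attribute that the newer base
# class's call() expects. Class names below are illustrative stand-ins.

class BaseTranspose:
    """Stands in for a newer Keras Conv2DTranspose."""
    def __init__(self, output_padding=None):
        self.output_padding = output_padding  # newer Keras sets this in __init__

    def call(self):
        if self.output_padding is None:  # the line that raises in the issue
            return "no output padding"

class ConvSNTranspose(BaseTranspose):
    """Stands in for ConvSN2DTranspose written against an older Keras:
    its __init__ never sets (or forwards) output_padding."""
    def __init__(self):
        pass  # does not call super().__init__(), so the attribute is missing

layer = ConvSNTranspose()
try:
    layer.call()
except AttributeError as e:
    print(e)  # ... object has no attribute 'output_padding'

# A possible stopgap until the layer is updated: set the attribute manually
# after construction (assumes the default of no extra output padding is wanted).
layer.output_padding = None
print(layer.call())  # -> "no output padding"
```

The real fix would be updating `ConvSN2DTranspose.__init__` to accept and store `output_padding` the way the matching Keras version's `Conv2DTranspose` does.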

TypeError: The added layer must be an instance of class Layer

I wanted to add SN layers as described and copied SpectralNormalizationKeras.py into the respective directory. However, the layer could not be added to the model. Here are the relevant parts of the code:

from SpectralNormalizationKeras import DenseSN, ConvSN2D

(...)

 def build_critic(self, spectral_normalization=True):

        model = Sequential()

        model.add(ConvSN2D(16, kernel_size=3, strides=2,kernel_initializer='glorot_uniform', input_shape=self.img_shape, padding="same"))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(ConvSN2D(32, kernel_size=3, strides=2,kernel_initializer='glorot_uniform', padding="same"))
        model.add(ZeroPadding2D(padding=((0,1),(0,1))))
        model.add(BatchNormalization(momentum=0.8))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(ConvSN2D(64, kernel_size=3, strides=2,kernel_initializer='glorot_uniform', padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(ConvSN2D(128, kernel_size=3, strides=1,kernel_initializer='glorot_uniform',padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Flatten())
        model.add(DenseSN(1,kernel_initializer='glorot_uniform'))

        model.summary()

        img = Input(shape=self.img_shape)
        validity = model(img)

        return Model(img, validity)

Here is the error traceback:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-24-da7bad03b7a7> in <module>
      1 if __name__ == '__main__':
----> 2     wgan = WGANGP()
      3     wgan.train(epochs=30001, batch_size=256, sample_interval=1500)

<ipython-input-23-9f58d066c64d> in __init__(self)
     27         # Build the generator and critic
     28         self.generator = self.build_generator()
---> 29         self.critic = self.build_critic()
     30 
     31         #-------------------------------

<ipython-input-23-9f58d066c64d> in build_critic(self, spectral_normalization)
    141         model = Sequential()
    142 
--> 143         model.add(ConvSN2D(16, kernel_size=3, strides=2,kernel_initializer='glorot_uniform', input_shape=self.img_shape, padding="same"))
    144         model.add(LeakyReLU(alpha=0.2))
    145         model.add(Dropout(0.25))

~\Anaconda3\envs\Tensorflow\lib\site-packages\tensorflow\python\keras\engine\sequential.py in add(self, layer)
    126       raise TypeError('The added layer must be '
    127                       'an instance of class Layer. '
--> 128                       'Found: ' + str(layer))
    129     self.built = False
    130     if not self._layers:

TypeError: The added layer must be an instance of class Layer. Found: <SpectralNormalizationKeras.ConvSN2D object at 0x000001BF340526D8>

Many thanks in advance for any suggestion on how to overcome this.
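The traceback path (`tensorflow\python\keras\engine\sequential.py`) suggests the model is built with `tf.keras`, while SpectralNormalizationKeras subclasses the standalone `keras` package's `Layer`; the two packages define separate `Layer` base classes, so `tf.keras`'s `isinstance` check rejects the layer. Here is a small pure-Python sketch of that mechanism (the class and function names are stand-ins, not real Keras internals):

```python
# Hedged sketch: standalone `keras` and `tensorflow.keras` each define their
# own Layer base class, so a type check in one framework rejects layers built
# against the other. All names below are illustrative stand-ins.

class KerasLayer:            # stands in for standalone keras' base Layer
    pass

class TFKerasLayer:          # stands in for tf.keras' base Layer
    pass

class ConvSN2D(KerasLayer):  # SpectralNormalizationKeras subclasses keras' Layer
    pass

def tf_sequential_add(layer):
    # mimics the type check in tf.keras Sequential.add seen in the traceback
    if not isinstance(layer, TFKerasLayer):
        raise TypeError('The added layer must be an instance of class Layer. '
                        'Found: ' + str(layer))
    return layer

try:
    tf_sequential_add(ConvSN2D())   # raises: wrong Layer hierarchy
except TypeError as e:
    print(e)

tf_sequential_add(TFKerasLayer())   # same hierarchy: accepted
```

The usual remedy is to import everything (`Sequential`, `Model`, `Input`, the stock layers) from the same namespace that SpectralNormalizationKeras imports from, so all layers share one base class.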

Problems in Tensorflow2

Dear,

Thank you very much for your work.
I want to use your work with TensorFlow 2.5, but I ran into some problems.

  1. First, I removed from keras.legacy import interfaces, which no longer exists in the current Keras version (and I don't see it used anywhere).
  2. There is no module named InputSpec.
  3. When I import InputSpec via from tensorflow.keras.layers import InputSpec, I get this warning:

The following Variables were used a Lambda layer's call (tf.nn.bias_add), but
are not present in its tracked objects:
  <tf.Variable 'discriminator_ConvSN2D_1/bias:0' shape=(64,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
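For the first two points, one way to keep the file importable across Keras versions is a guarded import shim. This is only a sketch of that idea (it assumes, as the issue notes, that the `interfaces` decorator is not actually needed), not code from the repo:

```python
# Hedged sketch: guard the version-specific imports so the module loads under
# both old standalone Keras and TF2-era Keras. Assumes `interfaces` is unused.
try:
    from keras.legacy import interfaces  # present only in old standalone Keras
except ImportError:
    interfaces = None  # removed in newer Keras; safe if the decorator is unused

try:
    from tensorflow.keras.layers import InputSpec  # TF2 location
except ImportError:
    try:
        from keras.engine import InputSpec  # older standalone-Keras location
    except ImportError:
        InputSpec = None  # neither Keras is installed in this environment
```

The Lambda-layer warning is separate: as the message itself says, it points at rewriting the affected logic as a subclassed `Layer` so the bias variable is tracked.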

Looking forward to your reply. Thank you very much.

Best regards

Typos and training stuck while using gradient penalty

Hi guys,
I've downloaded your code and tried it out, but I quickly found some typos. For instance:
line 137 in CIFAR_SNGAN.py: BATCH_SIZE instead of BATCHSIZE,
which leads me to wonder whether you ever tried the code with gradient penalty turned on.

Because after fixing this typo, training doesn't start at all; it gets stuck at the time estimate and doesn't move on even after two hours.

Thank you,
Vaclav

Normalization in each iteration?

Hello,

I was interested in using this code, but I found that the spectral normalization happens only while the layer is being built. When the conv layers are called, they just use the kernel and do not normalize it. So after the first iteration, when a conv layer is called, there won't be any normalization.

Am I missing something?

Thanks
Ansh
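For context, the SN-GAN recipe is to refresh a power-iteration estimate of the kernel's largest singular value on every forward call and divide the kernel by it, not just once at build time. Below is a minimal NumPy sketch of that per-call normalization (names and shapes are illustrative, not the repo's code):

```python
# Hedged NumPy sketch of per-call spectral normalization: one power-iteration
# step per forward call refreshes the spectral-norm estimate, then the kernel
# is divided by it. Function and variable names are illustrative.
import numpy as np

def spectrally_normalized_kernel(W, u, eps=1e-12):
    """One power-iteration step; returns W / sigma and the updated u."""
    W2d = W.reshape(-1, W.shape[-1])            # flatten to 2-D, as SN-GAN does
    v = W2d.T @ u
    v = v / (np.linalg.norm(v) + eps)
    u_new = W2d @ v
    u_new = u_new / (np.linalg.norm(u_new) + eps)
    sigma = u_new @ W2d @ v                     # estimate of the top singular value
    return W / sigma, u_new

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3, 16, 32))                   # e.g. a conv kernel
u = rng.normal(size=(W.reshape(-1, 32).shape[0],))    # persistent u vector

# Simulated training loop: the normalization runs inside every call,
# carrying u forward, rather than only once when the layer is built.
for _ in range(200):
    W_sn, u = spectrally_normalized_kernel(W, u)

# After the estimate converges, the flattened normalized kernel's largest
# singular value is close to 1.
print(np.linalg.svd(W_sn.reshape(-1, 32), compute_uv=False)[0])
```

So the issue's reading is the right thing to check: if the division by sigma only happens in `build`, the constraint stops holding as soon as the weights are updated.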
