
ufcnn's People

Contributors

nmayorov

ufcnn's Issues

About resolution levels

```python
dilation = 1
for w, b in zip(H_weights, H_biases):
    x = tf.nn.relu(conv(x, w, b, filter_length, dilation))
    H_outputs.append(x)
    dilation *= 2
```

Doesn't this code make the first G conv layer G4 rather than G3? Since H3 already has a dilation of 2**(3-1), the last line gives G3 a dilation of 2**(4-1). This doesn't affect performance much, though, since it amounts to just one extra resolution level.
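The off-by-one can be checked in a few framework-free lines (a minimal sketch of the same loop without the convolution; the three-layer count and the H/G naming follow the issue's example):

```python
# Dilations assigned inside the loop: H-layer k gets 2**(k-1).
dilation = 1
dilations = []
for k in range(1, 4):           # three H layers, as in the example
    dilations.append(dilation)
    dilation *= 2

print(dilations)  # [1, 2, 4] -> H3 uses 2**(3-1)

# After the loop, `dilation` has already been doubled to 8 == 2**(4-1).
# If the first G layer reuses it unchanged, G3 gets the dilation meant
# for a hypothetical G4 -- the off-by-one the issue points out.
print(dilation)   # 8
```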

some concern

In ufcnn.py, line 271:

```python
C_biases = init_conv_weights([n_outputs], random_seed)
```

Is this right? It seems it should be:

```python
C_biases = init_conv_bias([n_outputs], random_seed)
```

I don't know whether this will cause errors.
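For reference, the two initializers differ in intent: a weight initializer typically draws from a He-style normal with stddev sqrt(2/fan_in), while a bias initializer conventionally returns zeros. A framework-free sketch of the distinction (these function bodies are assumptions for illustration, not the repository's actual code):

```python
import numpy as np

def init_conv_weights(shape, seed):
    # He-style normal init, stddev = sqrt(2 / fan_in) -- assumed to
    # mirror the repository's weight initializer.
    rng = np.random.RandomState(seed)
    n = np.prod(shape[:-1])
    return rng.normal(scale=(2.0 / n) ** 0.5, size=shape)

def init_conv_bias(shape):
    # Biases are conventionally initialized to zero.
    return np.zeros(shape)

# Using the weight initializer for C_biases still produces the right
# shape, so the model runs -- but the biases start at random values
# instead of zeros.
b_wrong = init_conv_weights([2], 0)
b_right = init_conv_bias([2])
print(b_wrong.shape, b_right)
```

So the typo is unlikely to crash anything; it only changes the starting point of the biases.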

construct_ufcnn with n_levels larger than one gives an error

Hi @nmayorov,
are you still working on this? construct_ufcnn gives me the error below whenever n_levels is different from 1.
I'm using Python 2.7 and TensorFlow 1.1.0.

Thanks!
```python
>>> construct_ufcnn(n_inputs=1, n_outputs=2, n_levels=1, n_filters=10, filter_length=5, random_seed=0)
(<tf.Tensor 'Placeholder_8:0' shape=(?, ?, 1) dtype=float32>, <tf.Tensor 'Squeeze_4:0' shape=(?, ?, 2) dtype=float32>, [<tf.Variable 'Variable_60:0' shape=(1, 5, 1, 10) dtype=float32_ref>, <tf.Variable 'Variable_62:0' shape=(1, 5, 10, 10) dtype=float32_ref>, <tf.Variable 'Variable_64:0' shape=(1, 5, 10, 2) dtype=float32_ref>], [<tf.Variable 'Variable_61:0' shape=(10,) dtype=float32_ref>, <tf.Variable 'Variable_63:0' shape=(10,) dtype=float32_ref>, <tf.Variable 'Variable_65:0' shape=(2,) dtype=float32_ref>])
```

```python
>>> construct_ufcnn(n_inputs=1, n_outputs=2, n_levels=2, n_filters=10, filter_length=5, random_seed=0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "ufcnn/ufcnn.py", line 264, in construct_ufcnn
    x = tf.concat(3, [x_prev, x])
  File "/Users/pastorea/miniconda2/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1029, in concat
    dtype=dtypes.int32).get_shape()
  File "/Users/pastorea/miniconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 639, in convert_to_tensor
    as_ref=False)
  File "/Users/pastorea/miniconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 704, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/Users/pastorea/miniconda2/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 113, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/Users/pastorea/miniconda2/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 102, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/Users/pastorea/miniconda2/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 370, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "/Users/pastorea/miniconda2/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 302, in _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.
```
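This error is the classic symptom of the `tf.concat` signature change in TensorFlow 1.0: the pre-1.0 call order was `tf.concat(axis, values)`, while TensorFlow 1.0+ expects `tf.concat(values, axis)`, so the old-style call at ufcnn.py line 264 passes the tensor list where an integer axis is expected. A minimal sketch of the intended channel-wise join, using NumPy instead of TensorFlow to stay framework-free (shapes here are made-up examples):

```python
import numpy as np

# Two feature maps of shape (batch, height, width, channels),
# standing in for the tensors joined at ufcnn.py line 264.
x_prev = np.zeros((1, 1, 8, 10))
x = np.ones((1, 1, 8, 10))

# Old TF (< 1.0) style was concat(axis, values); TF >= 1.0 expects
# concat(values, axis), which matches NumPy's argument order:
y = np.concatenate([x_prev, x], axis=3)
print(y.shape)  # channels double: (1, 1, 8, 20)
```

So the fix on TensorFlow 1.1.0 is to change `tf.concat(3, [x_prev, x])` to `tf.concat([x_prev, x], 3)`.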

Gradients are Zero on Ubuntu and Python 2.7

I found an issue in my run of your UFCNN that caused the gradients to be 0 for every layer except the output layer. I think it is caused by Python 2 evaluating the stddev with integer division, which rounds it down to 0 most of the time:

```python
def init_conv_weights(shape, seed):
    n = np.prod(shape[:-1])
    initial = tf.random_normal(shape, stddev=(2 / n)**0.5, seed=seed)
    return tf.Variable(initial)
```

Changing the `2` in the stddev expression to `2.0` fixes it:

```python
def init_conv_weights(shape, seed):
    n = np.prod(shape[:-1])
    initial = tf.random_normal(shape, stddev=(2.0 / n)**0.5, seed=seed)
    return tf.Variable(initial)
```

There are probably a few other ways to force 2/n to be evaluated as a float, but this one worked for me; the network now converges.
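The root cause can be reproduced without TensorFlow: under Python 2 semantics, `/` on two integers truncates (Python 3's `//` operator), and `np.prod` of an integer shape returns an integer, so the He-initialization stddev sqrt(2/n) collapses to 0 whenever n > 2 and every weight starts at exactly zero. A quick sketch (the filter shape is a made-up example):

```python
import numpy as np

shape = (1, 5, 10, 10)      # example filter shape: (1, length, in, out)
n = np.prod(shape[:-1])     # fan-in = 50; np.prod returns an integer

# Python 2's integer "/" behaves like "//": the stddev truncates to 0,
# so tf.random_normal would draw every weight as exactly 0 and the
# gradients through those layers vanish.
bad_stddev = (2 // n) ** 0.5
good_stddev = (2.0 / n) ** 0.5  # forcing float division restores He init

print(bad_stddev, good_stddev)  # 0.0 vs ~0.2
```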

Thanks for the great work putting this together.
