
dagan's Introduction


Please check out TensorLayerX 🔥🔥🔥

TensorLayer is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides an extensive collection of customizable neural layers for building advanced AI models quickly; on top of it, the community has open-sourced a large number of tutorials and applications. TensorLayer was awarded the 2017 Best Open Source Software award by the ACM Multimedia Society. This project can also be found on OpenI and Gitee.

News

  • 🔥 TensorLayerX is a unified deep learning and reinforcement learning framework for all hardware, backends, and operating systems. The current version supports TensorFlow, PyTorch, MindSpore, PaddlePaddle, OneFlow, and Jittor as backends, allowing users to run the same code on different hardware such as Nvidia GPUs and Huawei Ascend.
  • TensorLayer is now in OpenI
  • Reinforcement Learning Zoo: Low-level APIs for professional usage, High-level APIs for simple usage, and a corresponding Springer textbook
  • Sipeed Maxi-EMC: Run TensorLayer models on the low-cost AI chip (e.g., K210) (Alpha Version)

Design Features

TensorLayer is a new deep learning library designed with simplicity, flexibility and high-performance in mind.

  • Simplicity : TensorLayer has a high-level layer/model abstraction which is effortless to learn. You can learn how deep learning can benefit your AI tasks in minutes through the massive examples.
  • Flexibility : TensorLayer APIs are transparent and flexible, inspired by the emerging PyTorch library. Compared to the Keras abstraction, TensorLayer makes it much easier to build and train complex AI models.
  • Zero-cost Abstraction : Though simple to use, TensorLayer does not require you to make any compromise in the performance of TensorFlow (Check the following benchmark section for more details).

TensorLayer stands at a unique spot in the TensorFlow wrappers. Other wrappers like Keras and TFLearn hide many powerful features of TensorFlow and provide little support for writing custom AI models. Inspired by PyTorch, TensorLayer APIs are simple, flexible and Pythonic, making it easy to learn while being flexible enough to cope with complex AI tasks. TensorLayer has a fast-growing community. It has been used by researchers and engineers all over the world, including those from Peking University, Imperial College London, UC Berkeley, Carnegie Mellon University, Stanford University, and companies like Google, Microsoft, Alibaba, Tencent, Xiaomi, and Bloomberg.

Multilingual Documents

TensorLayer has extensive documentation for both beginners and professionals. The documentation is available in both English and Chinese.

English Documentation Chinese Documentation Chinese Book

If you want to try the experimental features on the master branch, you can find the latest documentation here.

Extensive Examples

You can find a large collection of examples that use TensorLayer here and in the following spaces:

Getting Started

TensorLayer 2.0 relies on TensorFlow, NumPy, and other packages. To use GPUs, CUDA and cuDNN are required.

Install TensorFlow:

pip3 install tensorflow-gpu==2.0.0-rc1 # TensorFlow GPU (version 2.0 RC1)
pip3 install tensorflow # CPU version

Install the stable release of TensorLayer:

pip3 install tensorlayer

Install the unstable development version of TensorLayer:

pip3 install git+https://github.com/tensorlayer/tensorlayer.git

If you want to install the additional dependencies, you can also run

pip3 install --upgrade tensorlayer[all]              # all additional dependencies
pip3 install --upgrade tensorlayer[extra]            # only the `extra` dependencies
pip3 install --upgrade tensorlayer[contrib_loggers]  # only the `contrib_loggers` dependencies

If you are a TensorFlow 1.X user, you can use TensorLayer 1.11.0:

# For last stable version of TensorLayer 1.X
pip3 install --upgrade tensorlayer==1.11.0

Performance Benchmark

The following table shows the training speeds of VGG16 using TensorLayer and native TensorFlow on a TITAN Xp.

Mode      | Lib             | Data Format  | Max GPU Memory Usage (MB) | Max CPU Memory Usage (MB) | Avg CPU Memory Usage (MB) | Runtime (sec)
----------|-----------------|--------------|---------------------------|---------------------------|---------------------------|--------------
AutoGraph | TensorFlow 2.0  | channel last | 11833                     | 2161                      | 2136                      | 74
AutoGraph | TensorLayer 2.0 | channel last | 11833                     | 2187                      | 2169                      | 76
Graph     | Keras           | channel last | 8677                      | 2580                      | 2576                      | 101
Eager     | TensorFlow 2.0  | channel last | 8723                      | 2052                      | 2024                      | 97
Eager     | TensorLayer 2.0 | channel last | 8723                      | 2010                      | 2007                      | 95

Getting Involved

Please read the Contributor Guideline before submitting your PRs.

We suggest users report bugs using GitHub issues. Users can also discuss how to use TensorLayer in the Slack channel.



Citing TensorLayer

If you find TensorLayer useful for your project, please cite the following papers:

@article{tensorlayer2017,
    author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
    journal = {ACM Multimedia},
    title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
    url     = {http://tensorlayer.org},
    year    = {2017}
}

@inproceedings{tensorlayer2021,
  title={Tensorlayer 3.0: A Deep Learning Library Compatible With Multiple Backends},
  author={Lai, Cheng and Han, Jiarong and Dong, Hao},
  booktitle={2021 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
  pages={1--3},
  year={2021},
  organization={IEEE}
}


dagan's Issues

How to calculate PSNR in your paper?

After reading your paper and code, I'm confused about the calculation of PSNR. I used some MRI data to test the zero-filled (ZF) reconstruction and computed the NMSE with your code, which seems right. But when I calculate PSNR based on your NMSE, or on an RMSE I defined myself, it seems wrong. Also, don't you think the PSNR is too high for ZF reconstruction with only 20% of the data kept? When PSNR reaches 34 dB, images usually look very similar.
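As a point of reference for the question above, PSNR is commonly computed from the MSE as 10 * log10(MAX^2 / MSE); this is a generic sketch, not necessarily the exact normalization used in the DAGAN paper:

```python
import math

def psnr_from_mse(mse, max_val=1.0):
    """PSNR in dB for images scaled to [0, max_val].

    This is the common textbook definition; the paper's exact
    normalization (e.g. per-image max, complex magnitude) may differ.
    """
    return 10.0 * math.log10((max_val ** 2) / mse)

# An MSE of 4e-4 on [0, 1] images already gives ~34 dB, so a high PSNR
# for ZF reconstruction is plausible when images are mostly background.
print(round(psnr_from_mse(4e-4), 2))  # 33.98
```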

up7 layer's output is not equal to input.

Hello, may I ask about a problem?
Thank you for your code; I have been learning AI for about three weeks.
During training, the error described in the title occurs.
Please help.

up7 = {DeConv2d} Last layer is: DeConv2d (u_net/deconv7) [25, 2, 2, 512]
all_drop = {dict} {}
all_layers = {list} <class 'list'>: [<tf.Tensor 'u_net/conv1/Identity:0' shape=(25, 1, 1, 64) dtype=float32>, <tf.Tensor 'u_net/conv2/Identity:0' shape=(25, 1, 1, 128) dtype=float32>, <tf.Tensor 'u_net/bn2/lrelu:0' shape=(25, 1, 1, 128) dtype=float32>, <tf.Tensor 'u_net/conv3/Identity:0' shape=(25, 1, 1, 256) dtype=float32>, <tf.Tensor 'u_net/bn3/lrelu:0' shape=(25, 1, 1, 256) dtype=float32>, <tf.Tensor 'u_net/conv4/Identity:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/bn4/lrelu:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/conv5/Identity:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/bn5/lrelu:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/conv6/Identity:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/bn6/lrelu:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/conv7/Identity:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/bn7/lrelu:0' shape=(25, 1, 1, 512) dtype=float32>, <tf.Tensor 'u_net/conv8/lrelu:0' shape=(25, 1, 1, 51...
all_params = {list} <class 'list'>: [<tf.Variable 'u_net/conv1/kernel:0' shape=(4, 4, 1, 64) dtype=float32_ref>, <tf.Variable 'u_net/conv1/bias:0' shape=(64,) dtype=float32_ref>, <tf.Variable 'u_net/conv2/kernel:0' shape=(4, 4, 64, 128) dtype=float32_ref>, <tf.Variable 'u_net/conv2/bias:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'u_net/bn2/beta:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'u_net/bn2/gamma:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'u_net/bn2/moving_mean:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'u_net/bn2/moving_variance:0' shape=(128,) dtype=float32_ref>, <tf.Variable 'u_net/conv3/kernel:0' shape=(4, 4, 128, 256) dtype=float32_ref>, <tf.Variable 'u_net/conv3/bias:0' shape=(256,) dtype=float32_ref>, <tf.Variable 'u_net/bn3/beta:0' shape=(256,) dtype=float32_ref>, <tf.Variable 'u_net/bn3/gamma:0' shape=(256,) dtype=float32_ref>, <tf.Variable 'u_net/bn3/moving_mean:0' shape=(256,) dtype=float32_ref>, <tf.Variable 'u_net/bn3/moving_variance:0' shape=(256,) dtype=float3...

  • inputs = {Tensor} Tensor("u_net/conv8/lrelu:0", shape=(25, 1, 1, 512), dtype=float32)
  • name = {str} 'u_net/deconv7'
  • outputs = {Tensor} Tensor("u_net/deconv7/Identity:0", shape=(25, 2, 2, 512), dtype=float32)
    w_init = {TruncatedNormal} <tensorflow.python.ops.init_ops.TruncatedNormal object at 0x7fd754f26358>
    x = {Tensor} Tensor("bad_image:0", shape=(25, 1, 1, 1), dtype=float32)

FHD(1920x1080) size image

Hello. Your paper is very helpful; thank you.
However, I want to train on larger images and need advice on how to do it.

Can you tell me how to configure the network?

For reference, I am studying how to remove aliasing, using pairs of aliased and ground-truth images.
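One sizing constraint worth noting for the question above: an encoder-decoder with n stride-2 downsampling layers needs each spatial dimension divisible by 2^n (the debug logs elsewhere on this page show eight stride-2 convolutions, conv1 through conv8, consistent with 256x256 inputs). A small sketch of the arithmetic; the padding targets mentioned in the comments are hypothetical suggestions, not the authors' recommendation:

```python
def max_downsamplings(dim):
    """Number of times `dim` can be halved exactly by stride-2 layers."""
    n = 0
    while dim % 2 == 0:
        dim //= 2
        n += 1
    return n

print(max_downsamplings(256))   # 8 -> fits an 8-level encoder exactly
print(max_downsamplings(1920))  # 7 (1920 = 2^7 * 15)
print(max_downsamplings(1080))  # 3 (1080 = 2^3 * 135)
# 1080 only survives three halvings, so a 1920x1080 input would need
# padding or resizing (e.g. to 1920x1280) before an 8-level U-Net,
# or the network depth would have to be reduced.
```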

Error while loading data_loader.py

While loading data_loader.py, a dimension error shows up at "img_2d = np.transpose(img_2d, (1, 0))". Also, after reducing the number of data sets for faster training, the following error appears:

Traceback (most recent call last):
  File "sef_data_loader.py", line 94, in <module>
    X_train = X_train[:, :, :, np.newaxis]
IndexError: too many indices for array

Questions about the weight for pixel loss

Hello,

I noticed that you use 15 as the weight for pixel loss, which is much larger than other weights such as for perceptual loss, frequency loss and also generator loss. If such a large coefficient is used, the influence of the discriminator will be reduced and the generator might become a direct estimator approximately. I would like to know if mode collapse can happen when the large weight for pixel loss is used.
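To make the weighting concern concrete, here is a hypothetical sketch of a combined generator loss. The pixel weight of 15 is the value discussed above; the other coefficients are made-up placeholders, not the paper's actual values:

```python
# Hypothetical weights: W_PIXEL = 15 is from the discussion above; the
# rest are illustrative placeholders only.
W_PIXEL, W_PERCEPTUAL, W_FREQUENCY, W_ADV = 15.0, 0.1, 0.1, 1.0

def generator_loss(l_pixel, l_perceptual, l_frequency, l_adv):
    """Weighted sum of the generator's loss terms."""
    return (W_PIXEL * l_pixel + W_PERCEPTUAL * l_perceptual
            + W_FREQUENCY * l_frequency + W_ADV * l_adv)

# With equal raw loss magnitudes, the pixel term dominates the total,
# which is the questioner's point: the generator is pushed toward an
# MSE-like direct estimator and the adversarial signal is weak.
print(generator_loss(1.0, 1.0, 1.0, 1.0))  # ~16.2
```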

BTW, I implemented conditional WGAN for MRI without pixel loss, but the image quality is not good enough.

Thanks!

Some questions about the vgg_prepro function in utils.py

Issue Description

Hello,
While reading your code, I had some questions about the vgg_prepro function in utils.py.
This function is defined as follows:

def vgg_prepro(x):
    x = imresize(x, [244, 244], interp='bilinear', mode=None)
    x = np.tile(x, 3)
    x = x / 127.5 - 1
    return x

I know this function preprocesses an image, for example changing it from [256, 256, 1] to [244, 244, 3], but I don't understand why we need x / 127.5 - 1; is that where the image gets scaled to [-1, 1]?

In addition, I would also like to ask why the resize target is 244 instead of 224, the size used in the original VGG paper.

I was hoping you could help me with this; I can't figure it out.
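For the normalization part of the question, x / 127.5 - 1 is the standard linear map from the uint8 range [0, 255] onto [-1, 1] (the division alone would give [0, 2]); whether 244 versus 224 is intentional is something only the authors can confirm. A minimal check:

```python
def scale(x):
    """The x / 127.5 - 1 step: maps the uint8 range [0, 255] onto [-1, 1]."""
    return x / 127.5 - 1

print(scale(0), scale(127.5), scale(255))  # -1.0 0.0 1.0
```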

How to calculate the reconstruction time

Hi @nebulaV, I ran your code to reconstruct one image, using the following code at evaluation time:

    start_time = time.time()
    evaluate_restore_img = sess.run(net.outputs, {evaluate_image: evaluate_samples_bad})
    print("took: %4.4fs" % (time.time() - start_time)) 

I run on a GPU, but the measured time is about 16 s, while your paper reports 5.4 ms. I do not understand how that result is calculated.
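One likely explanation (an assumption, not confirmed by the authors): the first sess.run pays one-off graph construction and CUDA initialization costs, so per-image latency should be measured after warm-up runs and averaged over many runs. A generic timing sketch; timed, warmup, and runs are hypothetical names:

```python
import time

def timed(fn, warmup=3, runs=100):
    """Average wall-clock time of fn(), excluding warm-up calls.

    The first call typically pays one-off initialization costs
    (graph compilation, CUDA context setup) and should not be timed.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Usage sketch, mirroring the evaluation snippet above:
#   avg = timed(lambda: sess.run(net.outputs,
#                                {evaluate_image: evaluate_samples_bad}))
#   print("avg per image: %.4f ms" % (avg * 1e3))
```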

name 'tl' is not defined

Hello,

I came across your work and was very excited to run it. Unfortunately, I keep getting the error message "name 'tl' is not defined" when I run train.py (https://github.com/tensorlayer/DAGAN/blob/master/train.py#L475); I ran this according to the README. Many built-in functions are called through this tl module, so it does not look like I can get away with commenting out all the tl lines. Can you please help?

Updated Version of Code

Hi, do you have an updated version of the code?

TensorFlow 1.x is quite old now, and I am running into a layer-building problem.

Thanks,
Junyu
