
tensorflow_engineering_implementation's Introduction

TensorFlow_Engineering_Implementation

The source code and datasets for the book Deep Learning: Best Practices on TensorFlow Engineering Implementation.

Directory Guide

code: the companion source files for the original book

tf2code: part of the book's code, converted to run on TensorFlow 2.1

Companion code and datasets for the book 《深度学习之TensorFlow工程化项目实战》 (Deep Learning: Engineering TensorFlow Projects in Practice). The datasets are too large to host here; please download them from www.aianaconda.com.

This AI technology handbook focuses squarely on practical application. It covers usage notes and development techniques for TensorFlow 1.x through 2.0, distilled from several years of experience at the Code Doctor studio (代码医生工作室). The book runs to more than 740 pages and contains 75 examples, progressing from basic static graphs to dynamic (eager) graphs and compiled subgraphs; from estimators to feature columns to TFTS; from native TensorFlow syntax to tf.keras techniques; from building TFRecord datasets to using the Dataset API; plus introductions to TF_serving, saved_model, TF_lite, cleverhans, and more.

1. Highlights of the book:

(1) Application scenarios

In terms of application scenarios, the book covers: image classification, model fine-tuning, object detection, pixel-level semantic segmentation, text classification, feature engineering, numerical analysis, time-series data analysis, feature preprocessing, exploratory data analysis, knowledge graphs, machine translation, chatbots, recommender systems, speech synthesis, text generation, sequence-sample generation, cross-domain image-to-text generation, predictive-maintenance tasks, sharp-image generation (deblurring), multi-attribute image synthesis, attacks on and defenses of AI models, and deployment to the web (URL-based), Android, and iOS.

(2) Techniques

On the technical side of deep learning, the book also covers many strong models, including deep convolutions, dilated (atrous) convolutions, capsule convolutional networks, matrix capsules, Mask R-CNN, YOLO V3, PNASNet, QRNN, SRU, IndRNN, IndyLSTM, JANET, and more. It also cites a large number of papers (more than 30) for further reading.

(3) Relationship to the previous book (updated techniques, richer and more practice-oriented content)

This book can be regarded as the follow-up to 《深度学习之TensorFlow:入门、原理与进阶实战》 (Deep Learning with TensorFlow: Introduction, Principles, and Advanced Practice). The two share a continuous thread of knowledge, and on that foundation this book adds a number of useful techniques, including:

  • A more practical dataset case study, plus an introduction to the TFDS API
  • Converting and nesting among eager (dynamic-graph), static-graph, and estimator code, with examples
  • The relationship between TensorFlow 1.x and TensorFlow 2.x, with conversion examples (a minimal sketch follows this list)
  • Feature-engineering essentials, plus recommender systems and knowledge graphs combined with deep learning
  • An upgraded dropout (Targeted Dropout)
  • A range of attention mechanisms (multi-head attention, BahdanauAttention, LuongAttention, monotonic attention, hybrid attention)
  • A range of normalizations (ReNorm, LayerNorm, instance_norm, GroupNorm, SwitchableNorm)
  • Adding multinomial-distribution sampling to RNN networks
  • A fuller introduction to the new Seq2Seq framework, with lower-level usage examples
  • Security techniques for AI models (FGSM, black-box attacks, and other methods)
  • Examples of image synthesis, speech synthesis, and adversarial-sample synthesis for the security domain
  • New deployment examples for the web (URL-based), Android, and iOS
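
As a quick illustration of the 1.x-to-2.x conversion item above, here is a minimal sketch (ours, not taken from the book) of how the TF 1.x placeholder/Session pattern maps onto TF 2.x eager execution with tf.function:

    import tensorflow as tf

    # TF 2.x style: eager execution plus tf.function, which traces the Python
    # function into a graph, much like a compiled subgraph in TF 1.x.
    @tf.function
    def double(x):
        return x * 2.0

    print(double(tf.constant(3.0)).numpy())  # prints 6.0

    # The legacy TF 1.x pattern still runs through the compat layer
    # (left commented out so this script stays in eager mode):
    # tf1 = tf.compat.v1
    # tf1.disable_eager_execution()
    # x = tf1.placeholder(tf.float32, shape=[], name="x")
    # y = x * 2.0
    # with tf1.Session() as sess:
    #     print(sess.run(y, feed_dict={x: 3.0}))  # prints 6.0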

2. Topics covered in the book:

TF-slim, TF-Hub, T2T, tf.layers, tf.js, TFDS, tf.Keras, TFLearn, tfdbg, Training Hooks, Estimators, eager, TF_CONFIG, KubeFlow, tf.feature_column, knowledge graphs, sequence_feature_column, TFBT, factorization, Lattice, tf.Transform, lattice calibration models, WALS, k-means, BoostedTrees, deep convolution, dilated (atrous) convolution, depthwise separable convolution, capsule convolutional networks, matrix capsules, TextCnn, ResNet, PNASNet, VGG, YOLO V3, Mask R-CNN, Targeted Dropout, QRNN, SRU, IndRnn, IndyLSTM, JANET, Seq2Seq, TFTS, multinomial distribution, Tacotron, TFGan, multi-head attention, BahdanauAttention, LuongAttention, monotonic attention, hybrid attention, STFT, ReNorm, LayerNorm, instance_norm, GroupNorm, SwitchableNorm, FGSM, cleverhans, black-box attacks, Jacobian matrices, defun, TF_serving, saved_model, TF_lite.

As before, the book keeps the same long-standing practices:

  • A QQ group where the author answers questions in person
  • Errata kept in sync on the aianaconda website (www.aianaconda.com)
  • An open forum for readers to exchange ideas and browse past questions (bbs.aianaconda.com)
  • All companion code and the datasets open-sourced (the datasets are too large to host here; download them from www.aianaconda.com)
  • Ongoing updates on AI techniques related to the book via the 相约机器人 WeChat public account


tensorflow_engineering_implementation's People

Contributors

aianaconda

tensorflow_engineering_implementation's Issues

Performance issues in your project (by P3)

Hello! I've found a performance issue in your project: batch() should be called before map(), which could make your program more efficient. The TensorFlow documentation supports this.

The details are listed below:

  • /tf2code/Chapter6/code6-19/code6-19-TF2-Resnet.py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /tf2code/Chapter6/code6-19/code6-19-TF1.py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /tf2code/Chapter7/Code7-2/code7-2-TF1.py: dataset.batch(batch_size) should be called before dataset.map(parse_csv, num_parallel_calls=5).
  • /tf2code/Chapter7/Code7-2/code7-2-TF2.py: dataset.batch(batch_size) should be called before dataset.map(parse_csv, num_parallel_calls=5).
  • /tf2code/Chapter7/Code7-1/code7-1-TF1.py: dataset.batch(batch_size) should be called before dataset.map(parse_csv, num_parallel_calls=5).
  • /tf2code/Chapter7/Code7-1/code7-1-TF2.py: dataset.batch(batch_size) should be called before dataset.map(parse_csv, num_parallel_calls=5).
  • /tf2code/Chapter4/code4-13/code4-13-TF2.py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /tf2code/Chapter4/code4-10/code4-10-TF1.py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /tf2code/Chapter4/code4-10/code4-10-TF2 -TFa.py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /tf2code/Chapter4/code4-10/code4-10-TF2 - 副本.py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /tf2code/Chapter4/code4-10/code4-10-TF2(程序没问题框架有bug).py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /tf2code/Chapter4/code4-12/code4-12-TF2.py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /tf2code/Chapter4/code4-12/code4-12-TF1.py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /tf2code/Chapter10/code10.3/code10-4-mydataset-TF1.py: dataset.batch(batch_size,drop_remainder = drop_remainder) should be called before dataset.map(map_func_, num_parallel_calls=num_threads) and dataset.map(parse_func, num_parallel_calls=num_threads).
  • /tf2code/Chapter10/code10.3/code10-4-mydataset-TF2.py: dataset.batch(batch_size,drop_remainder = drop_remainder) should be called before dataset.map(map_func_, num_parallel_calls=num_threads) and dataset.map(parse_func, num_parallel_calls=num_threads).
  • /tf2code/Chapter10/code10.2/code-10-2-训练deblur-TF1.py: dataset.batch(batch_size) should be called before dataset.map(_parseone).
  • /tf2code/Chapter10/code10.2/code-10-3-使用deblur模型-TF1.py: dataset.batch(batch_size) should be called before dataset.map(_parseone).
  • /tf2code/Chapter10/code10.2/code-10-2-训练deblur-TF2.py: dataset.batch(batch_size) should be called before dataset.map(_parseone).
  • /tf2code/Chapter10/code10.2/code-10-3-使用deblur模型-TF2.py: dataset.batch(batch_size) should be called before dataset.map(_parseone).
  • /code/4-12 在动态图里读取Dataset数据集.py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /code/4-11 将TFRecord文件制作成Dataset数据集.py: dataset.batch(batchsize) should be called before dataset.map(_parseone).
  • /code/10-2 训练deblur.py: dataset.batch(batch_size) should be called before dataset.map(_parseone).
  • /code/9-15 cn_dataset.py: dataset.batch(batch) should be called before dataset.map(mymap).
  • /code/6-19 用ResNet识别桔子和苹果.py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /code/10-3 使用deblur模型.py: dataset.batch(batch_size) should be called before dataset.map(_parseone).
  • /code/9-3 利用Resnet进行样本预处理.py: .batch(batchsize) should be called before .map(load_image).
  • /code/10-4 mydataset.py: dataset.batch(batch_size,drop_remainder = drop_remainder) should be called before dataset.map(map_func_, num_parallel_calls=num_threads) and dataset.map(parse_func, num_parallel_calls=num_threads).
  • /code/4-13 在动态图里读取Dataset数据集_tf2版.py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /code/5-1 mydataset.py: dataset.batch(batch_size) should be called before dataset.map(_parse_function, num_parallel_calls=num_workers), dataset.map(training_preprocess, num_parallel_calls=num_workers) and dataset.map(val_preprocess,num_parallel_calls=num_workers).
  • /code/7-1 用wide and deep模型预测人口收入.py: dataset.batch(batch_size) should be called before dataset.map(parse_csv, num_parallel_calls=5).
  • /code/4-10 将图片文件制作成Dataset数据集.py: dataset.batch(batchsize) should be called before dataset.map(_parseone) and dataset.map(_random_rotated30).
  • /code/7-2 用boosted_trees模型预测人口收入.py: dataset.batch(batch_size) should be called before dataset.map(parse_csv, num_parallel_calls=5).

In addition, you need to check whether the function passed to map() (e.g., parse_csv in dataset.map(parse_csv, num_parallel_calls=5)) is affected by the reorder, so that the changed code still works properly. For example, if parse_csv expected input of shape (x, y, z) before the fix, it will receive input of shape (batch_size, x, y, z) afterwards.
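
To make the shape point concrete, here is a hedged sketch of the reordered pipeline; parse_csv, the record defaults, and train.csv below are illustrative stand-ins, not the repository's actual code:

    import tensorflow as tf

    def parse_csv(lines):
        # After the reorder, `lines` is a whole batch of CSV rows, shape (batch_size,).
        # tf.io.decode_csv accepts batched records, so one call parses the full batch.
        fields = tf.io.decode_csv(lines, record_defaults=[[0.0], [0.0], [0]])
        features = {"f0": fields[0], "f1": fields[1]}  # each column: shape (batch_size,)
        return features, fields[2]                     # labels: shape (batch_size,)

    dataset = tf.data.TextLineDataset("train.csv")
    dataset = dataset.batch(32)                             # batch first ...
    dataset = dataset.map(parse_csv, num_parallel_calls=5)  # ... then map once per batch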

Looking forward to your reply. Btw, I would be glad to create a PR to fix it if you are too busy.

'learning_rate:0' refers to a Tensor which does not exist

In the wide_and_deep code for predicting user income, this error appears. I searched Google but could not find a suitable answer. What is causing this problem?

KeyError: "The name 'learning_rate:0' refers to a Tensor which does not exist. The operation, 'learning_rate', does not exist in the graph."

Performance issues in tf2code/Chapter6/code6-19/code6-19-TF2-Resnet.py (P2)

Hello, I found a performance issue in the definition of dataset in tf2code/Chapter6/code6-19/code6-19-TF2-Resnet.py: dataset = dataset.map(_parseone) is called without num_parallel_calls. I think adding it would increase the efficiency of your program.

The same issue also exists in dataset = dataset.map(_random_rotated30), dataset = dataset.map(_parseone), dataset = dataset.map(_random_rotated30), and 31 other places.

The TensorFlow documentation supports this suggestion.
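
The suggested change would look roughly like this (a sketch only; _parseone below is a stand-in for the repository's decode function, and images/*.jpg is an assumed path):

    import tensorflow as tf

    AUTOTUNE = tf.data.experimental.AUTOTUNE  # let tf.data pick the parallelism

    def _parseone(path):
        # Stand-in decode-and-resize map function.
        img = tf.io.read_file(path)
        img = tf.image.decode_jpeg(img, channels=3)
        return tf.image.resize(img, [224, 224])

    dataset = tf.data.Dataset.list_files("images/*.jpg")
    dataset = dataset.map(_parseone, num_parallel_calls=AUTOTUNE)  # parallel map
    dataset = dataset.prefetch(AUTOTUNE)  # overlap preprocessing with training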

Looking forward to your reply. Btw, I would be glad to create a PR to fix it if you are too busy.
