
fully-convolutional-neural-network-fcn-for-semantic-segmentation-tensorflow-implementation's Introduction

Fully convolutional neural network (FCN) for semantic segmentation with TensorFlow.

This is a simple implementation of a fully convolutional neural network (FCN). The net is based on the fully convolutional network described in the paper Fully Convolutional Networks for Semantic Segmentation. The code is based on the MIT-licensed FCN implementation by Sarath Shekkizhar, but replaces the VGG19 encoder with a VGG16 encoder. The net is initialized using the pre-trained VGG16 model by Marvin Teichmann. An improved PyTorch version of this net is given here.

This is an older model. For a newer, more accurate version of the net focused on recognizing materials in transparent vessels, see this link.

Input/output details

The input to the net is an RGB image (Figure 1, right). The net produces a pixel-wise annotation as a matrix the size of the image, with the value of each pixel corresponding to its class (Figure 1, left).

Figure 1) Semantic segmentation of an image of liquid in a glass vessel with the FCN. Red = glass, blue = liquid, white = background.
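
As a minimal sketch of this input/output contract (shapes only; the class indices here are assumptions):

    import numpy as np

    # Hypothetical illustration: the input is an RGB image of shape (H, W, 3);
    # the output is a label map of shape (H, W) whose pixel values are class
    # indices (e.g. 0 = background, 1 = glass, 2 = liquid).
    rgb_image = np.zeros((480, 640, 3), dtype=np.uint8)        # placeholder input
    label_map = np.zeros(rgb_image.shape[:2], dtype=np.uint8)  # placeholder output
    assert label_map.shape == rgb_image.shape[:2]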

Requirements

This network was run with the Python 3.6 Anaconda package and TensorFlow 1.1. Training was done on an Nvidia GTX 1080 under Linux Ubuntu 16.04.

Setup

  1. Download the code from the repository.
  2. Download a pre-trained VGG16 net and put it in the /Model_Zoo subfolder of the main code folder. A pre-trained VGG16 net can be downloaded from here [https://drive.google.com/file/d/0B6njwynsu2hXZWcwX0FKTGJKRWs/view?usp=sharing] or from here [ftp://mi.eng.cam.ac.uk/pub/mttt2/models/vgg16.npy]. (A quick loading check is sketched after this list.)
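
To sanity-check the downloaded file, the weights can be loaded along these lines (a sketch; the flags follow the usual convention for these legacy .npy weight files, not necessarily this repo's exact loading code):

    import numpy as np

    # vgg16.npy stores a dict mapping layer names to [weights, biases] arrays;
    # the encoding/pickle flags are needed for these Python-2-era files.
    vgg16_weights = np.load("Model_Zoo/vgg16.npy",
                            encoding="latin1", allow_pickle=True).item()
    print(sorted(vgg16_weights.keys()))  # expect names like conv1_1 ... fc8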

Tutorial

Instructions for training (in TRAIN.py):

In: TRAIN.py

  1. Set the folder of the training images in Train_Image_Dir.
  2. Set the folder of the ground truth labels in Train_Label_DIR.
  3. The label maps should be saved as PNG images with the same name as the corresponding image and a .png extension.
  4. Download a pre-trained VGG16 model and put it in model_path (this should happen automatically if you have an internet connection).
  5. Set the number of classes/labels in NUM_CLASSES.
  6. To use a validation set during training, set UseValidationSet=True, set the validation image folder in Valid_Image_Dir, and set the folder with the validation ground truth labels in Valid_Label_Dir. A configuration sketch follows this list.
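
Putting the steps together, the configuration block at the top of TRAIN.py ends up looking roughly like this (the paths and class count are hypothetical placeholders; the variable names are the ones the script uses):

    # hypothetical values -- adjust paths and class count to your dataset
    Train_Image_Dir = "Data_Zoo/Train_Images/"  # training RGB images
    Train_Label_DIR = "Data_Zoo/Train_Labels/"  # matching .png label maps
    NUM_CLASSES = 4                             # number of classes/labels
    UseValidationSet = True                     # evaluate on a validation set
    Valid_Image_Dir = "Data_Zoo/Valid_Images/"  # validation images
    Valid_Label_Dir = "Data_Zoo/Valid_Labels/"  # validation label maps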

Instructions for predicting pixel-wise annotation using a trained net (in Inference.py):

In: Inference.py

  1. Make sure you have a trained model in logs_dir (see TRAIN.py for creating one).
  2. Set Image_Dir to the folder where the input images for prediction are located.
  3. Set the number of classes in NUM_CLASSES.
  4. Set the folder where you want the output annotated images saved in Pred_Dir.
  5. Run the script. (The settings are sketched below.)
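
The corresponding settings in Inference.py, again with hypothetical paths:

    # hypothetical values
    logs_dir = "logs/"                   # directory holding the trained checkpoint
    Image_Dir = "Data_Zoo/Test_Images/"  # input images to annotate
    NUM_CLASSES = 4                      # must match the value used in training
    Pred_Dir = "Output_Prediction/"      # where annotated images are written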

Evaluating net performance using intersection over union (IoU):

In: Evaluate_Net_IOU.py

  1. Make sure you have a trained model in logs_dir (see TRAIN.py for creating one).
  2. Set Image_Dir to the folder where the input images for prediction are located.
  3. Set the folder of the ground truth labels in Label_DIR. The label maps should be saved as PNG images with the same name as the corresponding image and a .png extension.
  4. Set the number of classes in NUM_CLASSES.
  5. Run the script. (A per-class IoU sketch follows.)
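
For reference, per-class IoU between two integer label maps can be computed along these lines (a generic sketch, not necessarily the script's exact code):

    import numpy as np

    def iou_per_class(pred, gt, num_classes):
        # pred and gt are (H, W) integer label maps with values in [0, num_classes)
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, gt == c).sum()
            union = np.logical_or(pred == c, gt == c).sum()
            ious.append(inter / union if union > 0 else float("nan"))
        return ious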

Supporting data sets

The net was tested on a dataset of annotated images of materials in glass vessels. This dataset can be downloaded from here

The MIT Scene Parsing Benchmark, with over 20k pixel-wise annotated images, can also be used for training and can be downloaded from here.

Trained model

Trained model for glass and transparent vessel recognition

Trained model for liquid/solid chemical phase recognition in transparent glassware

For a newer, more accurate version of the net focused on recognizing materials in transparent vessels, see this link.


fully-convolutional-neural-network-fcn-for-semantic-segmentation-tensorflow-implementation's Issues

Loss NaN

My loss is NaN in all iterations. I am training on my own dataset. The labels are 0 for background and 1 for the ROI I intend to segment. Any idea why? Thank you.
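
One common (though here unconfirmed) cause of NaN loss with sparse cross-entropy is label values outside the range [0, NUM_CLASSES-1]; a quick sanity check on a loaded label map:

    import numpy as np

    NUM_CLASSES = 2  # 0 = background, 1 = ROI, as described above
    label = np.zeros((100, 100), dtype=np.uint8)  # placeholder for a loaded .png label map
    assert 0 <= label.min() and label.max() < NUM_CLASSES, "label values out of range"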

Getting an error while running TRAIN.py

I am getting the following error:

from ._conv import register_converters as _register_converters
npy file loaded
build model started
WARNING:tensorflow:From /home/dtrl/Downloads/FCN/Fully-convolutional-neural-network-FCN-for-semantic-segmentation-Tensorflow-implementation-master/BuildNetVgg16.py:110: calling argmax (from tensorflow.python.ops.math_ops) with dimension is deprecated and will be removed in a future version.
Instructions for updating:
Use the axis argument instead
FCN model built
Setting up Saver...

Kernel died, restarting
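
The deprecation warning itself is harmless and is unlikely to explain the dead kernel (which often points to running out of memory); silencing it only needs the keyword renamed in BuildNetVgg16.py, roughly as follows (a sketch assuming TensorFlow 1.x):

    import tensorflow as tf

    logits = tf.zeros([1, 8, 8, 4])   # placeholder (batch, H, W, NUM_CLASSES) tensor
    pred = tf.argmax(logits, axis=3)  # was: tf.argmax(logits, dimension=3)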

Loss NaN

I am getting a training loss of NaN at all steps. I'm training on my own dataset, and this happens from the very start of training.

Training problem

Excuse me, I have some questions. Can your FCN model predict multiple categories in a single image?
For example, to train a model to recognize dogs and cats, I can manually label a single image to generate a mask image that contains both dogs and cats at the same time. Can I do this without splitting dogs and cats into two separate trainings?
Just like the prediction result on your cover image: liquid and glass should not need to be trained separately, and the mask should mix both classes in the same mask image rather than being a binary classification.
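
For what it's worth, the label-map format described in the README does allow this: one mask image can carry several classes at once, because every pixel just stores a class index. A hedged illustration with hypothetical class indices:

    import numpy as np

    # one (H, W) label map holding two object classes plus background:
    # 0 = background, 1 = dog, 2 = cat (indices are hypothetical)
    label = np.zeros((100, 100), dtype=np.uint8)
    label[10:40, 10:40] = 1  # dog region
    label[60:90, 60:90] = 2  # cat region
    # trained with NUM_CLASSES = 3, the net predicts both regions in a single pass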

Simplification of network for simpler problem

I have trained the architecture (FCN-8s) on your dataset (vessels with four classes: background, empty region, liquid, solid) and evaluated it on your dataset. I didn't get a good result (for liquid & solid, IoU < 5%). So I would like to know whether you modified your architecture to adapt it to your problem. If possible, could you share your modifications with me?

Thanks,
Zizhao

TRAIN.py issues

Hello, I have a problem with this. My test pictures are RGB with three channels, and the labels are PNGs.

ValueError: could not broadcast input array from shape (456,607,4) into shape (456,607)
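
The shapes in the error suggest the label PNG carries an alpha channel, i.e. it loads as (H, W, 4) where the code expects a single-channel (H, W) map. A hedged fix is to flatten the labels to one channel, e.g.:

    from PIL import Image
    import numpy as np

    # "label.png" is a hypothetical path; this assumes the class index is stored
    # identically in each color channel of the RGBA label (adjust if not)
    rgba = np.array(Image.open("label.png"))  # shape (H, W, 4)
    label = rgba[:, :, 0]                     # shape (H, W)
    Image.fromarray(label).save("label.png")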

Augment=True fails

Hi, and thanks for the excellent work you have done on YOLOv8.

I'm trying to understand the difference between training, fine-tuning, and transfer learning.
I want to use YOLOv8 as the vehicle in which I share this journey with others.

I can successfully train my model from scratch using YOLOv8 (both n and x), with both the CLI and Python (preferred).
model = YOLO("yolov8n.pt")
model.train(data='Loki4.yaml', augment=False, epochs=20, imgsz=640, name='yolov8n_Loki4_20')

I can start fine-tuning by loading weights prior to training.

model = YOLO("yolov8n.yaml").load("yolov8n.pt")
model.train(data='Loki4.yaml', augment=False, epochs=20, imgsz=640, name='yolov8n_Loki4_20')

However, I cannot find a working syntax to conduct transfer learning (it does transfer the weights and redefine the classes).
Normally I would refactor the output layer(s) and freeze the model to preserve the feature set prior to training.
I found some YOLOv5 dialog on how to freeze layers and a recent callback function, but I am not sure where to stop freezing the model.
model.add_callback("on_train_start", freeze_layer) # suggested 10 layers

Looking at GitHub (ultralytics/ultralytics#189), you would freeze the backbone (P1 and P2) and modify the head. I found information that there are 168 layers in yolov8n and 268 in yolov8x. You suggested freezing only 3-5 layers; can you expand a little to cover both ends of the architecture? (A callback sketch follows.)
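
For reference, the callback pattern discussed in that thread looks roughly like this (a sketch adapted from the ultralytics/ultralytics#189 discussion; the number of frozen modules is an assumption to tune):

    def freeze_layer(trainer):
        # assumption: freeze the first 10 modules (roughly the backbone); tune per model
        num_freeze = 10
        frozen = [f"model.{i}." for i in range(num_freeze)]
        for name, param in trainer.model.named_parameters():
            param.requires_grad = True  # default: everything trains
            if any(prefix in name for prefix in frozen):
                param.requires_grad = False  # freeze matching backbone parameters

    # model.add_callback("on_train_start", freeze_layer)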

Finally, I found an issue trying to implement augmentation. This is likely my use of syntax, so I reverted to the coco128 dataset, but the error persists.

(fiftyone) PS D:\datasets> python .\train_coco128TL.py
New https://pypi.org/project/ultralytics/8.0.118 available Update with 'pip install -U ultralytics'
Ultralytics YOLOv8.0.114 Python-3.9.16 torch-2.0.1+cu117 CUDA:0 (Quadro RTX 5000 with Max-Q Design, 16384MiB)
yolo\engine\trainer: task=detect, mode=train, model=yolov8n.pt, data=./coco128/coco128TL.yaml, epochs=3, patience=50, batch=8, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=yolov8n_coco128, exist_ok=False, pretrained=False, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=0, resume=False, amp=True, fraction=1.0, profile=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, show=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, vid_stride=1, line_width=None, visualize=False, augment=True, agnostic_nms=False, classes=None, retina_masks=False, boxes=True, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0, cfg=None, v5loader=False, tracker=botsort.yaml, save_dir=runs\detect\yolov8n_coco1285

               from  n    params  module                                       arguments

0 -1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
1 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
2 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
3 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
4 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]
5 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
6 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]
7 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
8 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]
9 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
10 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
11 [-1, 6] 1 0 ultralytics.nn.modules.conv.Concat [1]
12 -1 1 148224 ultralytics.nn.modules.block.C2f [384, 128, 1]
13 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
14 [-1, 4] 1 0 ultralytics.nn.modules.conv.Concat [1]
15 -1 1 37248 ultralytics.nn.modules.block.C2f [192, 64, 1]
16 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
17 [-1, 12] 1 0 ultralytics.nn.modules.conv.Concat [1]
18 -1 1 123648 ultralytics.nn.modules.block.C2f [192, 128, 1]
19 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]
20 [-1, 9] 1 0 ultralytics.nn.modules.conv.Concat [1]
21 -1 1 493056 ultralytics.nn.modules.block.C2f [384, 256, 1]
22 [15, 18, 21] 1 897664 ultralytics.nn.modules.head.Detect [80, [64, 128, 256]]
Model summary: 225 layers, 3157200 parameters, 3157184 gradients

Transferred 355/355 items from pretrained weights
AMP: running Automatic Mixed Precision (AMP) checks with YOLOv8n...
AMP: checks passed
train: Scanning D:\datasets\coco128\labels\train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 [00:00<?, ?it/s]
val: Scanning D:\datasets\coco128\labels\train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 [00:00<?, ?it/s]
Plotting labels to runs\detect\yolov8n_coco1285\labels.jpg...
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 57 weight(decay=0.0), 64 weight(decay=0.0005), 63 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs\detect\yolov8n_coco1285
Starting training for 3 epochs...

  Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
    1/3      1.46G      1.206      1.793       1.27        127        640: 100%|██████████| 16/16 [00:02<00:00,  7.85it/s]
             Class     Images  Instances      Box(P          R      mAP50  mAP50-95):   0%|          | 0/8 [00:00<?, ?it/s]

Traceback (most recent call last):
File "D:\datasets\train_coco128TL.py", line 11, in
results = model.train(
File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\yolo\engine\model.py", line 371, in train
self.trainer.train()
File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\yolo\engine\trainer.py", line 192, in train
self._do_train(world_size)
File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\yolo\engine\trainer.py", line 370, in _do_train
self.metrics, self.fitness = self.validate()
File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\yolo\engine\trainer.py", line 476, in validate
metrics = self.validator(self)
File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\yolo\engine\validator.py", line 165, in call
self.loss += model.loss(batch, preds)[1]
File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\nn\tasks.py", line 213, in loss
return self.criterion(self.predict(batch['img']) if preds is None else preds, batch)
File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\yolo\utils\loss.py", line 135, in call
pred_distri, pred_scores = torch.cat([xi.view(feats[0].shape[0], self.no, -1) for xi in feats], 2).split(
TypeError: 'NoneType' object is not iterable

I appended the same parameters from the end of this Loki4.yaml file to the coco128.yaml file :-)

# COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) by Ultralytics
# Example usage: python train.py --data coco128.yaml
# parent
# ├── yolov5
# └── datasets
#     └── coco128  ← downloads here (7 MB)
#
# Set data root in C:\Users\RICT\AppData\Roaming\Ultralytics\settings.yaml
#
#
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: Loki4           # dataset root dir
train: train/images  # train images (relative to 'path') 128 images
val: test/images     # val images (relative to 'path') 128 images
#test:               # test images (optional)

nc : 59

# Classes
names: ['test','person','person-running','person-sitting','person-lying','head','hat','shirt','pants','sunglasses','ambulance','police','metro-fire', 'country-fire','SES','bouy','ship','fishing-boat','jetty','stingray','bullshark', 'pontoon','dolphin','whale','seal','gallantry','helicopter','crowd','road-signs','truck','rubbish-truck','passenger-vehicle','sea-rescue','jet-boat','inflatable-toy','breakwater','yacht','jet-ski','irb','sled','bus','train','taxis','ferry','kayak','beach-sign','beach-flag','surfbaord','background','uav','mobile-crane','commercial-crane','slsc-vehicles','speed-boat','sail-boat','peir','tv','cone','atv']

# Hyperparameters ------------------------------------------------------------------------------------------------------
# lr0: 0.01  # initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
# lrf: 0.01  # final learning rate (lr0 * lrf)
# momentum: 0.937  # SGD momentum/Adam beta1
# weight_decay: 0.0005  # optimizer weight decay 5e-4
# warmup_epochs: 3.0  # warmup epochs (fractions ok)
# warmup_momentum: 0.8  # warmup initial momentum
# warmup_bias_lr: 0.1  # warmup initial bias lr
# box: 7.5  # box loss gain
# cls: 0.5  # cls loss gain (scale with pixels)
# dfl: 1.5  # dfl loss gain
# pose: 12.0  # pose loss gain
# kobj: 1.0  # keypoint obj loss gain
# label_smoothing: 0.0  # label smoothing (fraction)
# nbs: 64  # nominal batch size
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 5.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.2  # image scale (+/- gain)
shear: 0.05  # image shear (+/- deg) from -0.5 to 0.5
perspective: 0.1  # image perspective (+/- fraction), range 0-0.001
flipud: 0.3  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 0.3  # image mosaic (probability)
mixup: 0.1  # image mixup (probability)
# copy_paste: 0.0  # segment copy-paste (probability)

Doesn't predict correctly

Hi,

Thank you so much for sharing your code.
I have tried it.

The training and validation run successfully. However, after running Inference.py, the prediction results in "/label" show only black intensity in the .png prediction images.

Could you let me know where the problem might be?

Thanks

Inference.py issues?

I trained your model in Anaconda and then ran Inference.py, but it failed. The error is the following:

runfile('C:/Users/myzhang/FCN/FCN_Tensorflow/Inference.py', wdir='C:/Users/myzhang/FCN/FCN_Tensorflow')
npy file loaded
build model started
WARNING:tensorflow:From C:\Users\myzhang\FCN\FCN_Tensorflow\BuildNetVgg16.py:110: calling argmax (from tensorflow.python.ops.math_ops) with dimension is deprecated and will be removed in a future version.
Instructions for updating:
Use the axis argument instead
FCN model built
Setting up Saver...
INFO:tensorflow:Restoring parameters from logs/model.ckpt-100000
Traceback (most recent call last):

File "", line 1, in
runfile('C:/Users/myzhang/FCN/FCN_Tensorflow/Inference.py', wdir='C:/Users/myzhang/FCN/FCN_Tensorflow')

File "C:\anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 710, in runfile
execfile(filename, namespace)

File "C:\anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 101, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "C:/Users/myzhang/FCN/FCN_Tensorflow/Inference.py", line 81, in
main()#Run script

File "C:/Users/myzhang/FCN/FCN_Tensorflow/Inference.py", line 50, in main
saver.restore(sess, ckpt.model_checkpoint_path)

File "C:\anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 1666, in restore
{self.saver_def.filename_tensor_name: save_path})

File "C:\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 889, in run
run_metadata_ptr)

File "C:\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)

File "C:\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _do_run
options, run_metadata)

File "C:\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)

InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [1,1,4096,2] rhs shape= [1,1,4096,4]
[[Node: save/Assign_2 = Assign[T=DT_FLOAT, _class=["loc:@w8"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](W8, save/RestoreV2_2)]]

Caused by op 'save/Assign_2', defined at:
File "C:\anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 245, in
main()
File "C:\anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 241, in main
kernel.start()
File "C:\anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 477, in start
ioloop.IOLoop.instance().start()
File "C:\anaconda3\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start
super(ZMQIOLoop, self).start()
File "C:\anaconda3\lib\site-packages\tornado\ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "C:\anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "C:\anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events
self._handle_recv()
File "C:\anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv
self._run_callback(callback, msg)
File "C:\anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback
callback(*args, **kwargs)
File "C:\anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "C:\anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "C:\anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell
handler(stream, idents, msg)
File "C:\anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "C:\anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 533, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2698, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "C:\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2808, in run_ast_nodes
if self.run_code(code, result):
File "C:\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
runfile('C:/Users/myzhang/FCN/FCN_Tensorflow/Inference.py', wdir='C:/Users/myzhang/FCN/FCN_Tensorflow')
File "C:\anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 710, in runfile
execfile(filename, namespace)
File "C:\anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 101, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/myzhang/FCN/FCN_Tensorflow/Inference.py", line 81, in
main()#Run script
File "C:/Users/myzhang/FCN/FCN_Tensorflow/Inference.py", line 45, in main
saver = tf.train.Saver()
File "C:\anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 1218, in init
self.build()
File "C:\anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 1227, in build
self._build(self._filename, build_save=True, build_restore=True)
File "C:\anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 1263, in _build
build_save=build_save, build_restore=build_restore)
File "C:\anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 751, in _build_internal
restore_sequentially, reshape)
File "C:\anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 439, in _AddRestoreOps
assign_ops.append(saveable.restore(tensors, shapes))
File "C:\anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 160, in restore
self.op.get_shape().is_fully_defined())
File "C:\anaconda3\lib\site-packages\tensorflow\python\ops\state_ops.py", line 276, in assign
validate_shape=validate_shape)
File "C:\anaconda3\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 56, in assign
use_locking=use_locking, name=name)
File "C:\anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2956, in create_op
op_def=op_def)
File "C:\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1470, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1,1,4096,2] rhs shape= [1,1,4096,4]
[[Node: save/Assign_2 = Assign[T=DT_FLOAT, _class=["loc:@w8"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](W8, save/RestoreV2_2)]]
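
The mismatched shapes ([1,1,4096,2] vs [1,1,4096,4]) usually mean that NUM_CLASSES in Inference.py differs from the class count the checkpoint in logs_dir was trained with; a plausible, unconfirmed fix:

    # in Inference.py: must equal the class count used in TRAIN.py
    # for the checkpoint being restored (apparently 4 here, not 2)
    NUM_CLASSES = 4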

Cannot run inference.py to predict the test set

Hello,
Thanks for sharing the code. I'm a complete beginner at coding and deep learning, and it helps me a lot.

I ran into a problem. After training the model, I tried running Inference.py to predict the test set and got the error below.

runfile('C:/Users/HP/Desktop/FCN/Inference.py', wdir='C:/Users/HP/Desktop/FCN')
D:\Users\HP\Anaconda3\lib\site-packages\h5py\__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
npy file loaded
build model started
WARNING:tensorflow:From C:\Users\HP\Desktop\FCN\BuildNetVgg16.py:110: calling argmax (from tensorflow.python.ops.math_ops) with dimension is deprecated and will be removed in a future version.
Instructions for updating:
Use the axis argument instead
FCN model built
Setting up Saver...
INFO:tensorflow:Restoring parameters from logs/model.ckpt-1000
Model restored...
Running Predictions:
Saving output to:Output_Prediction/
Start Predicting 22 images
0.0%
Traceback (most recent call last):

File "", line 1, in
runfile('C:/Users/HP/Desktop/FCN/Inference.py', wdir='C:/Users/HP/Desktop/FCN')

File "D:\Users\HP\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
execfile(filename, namespace)

File "D:\Users\HP\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "C:/Users/HP/Desktop/FCN/Inference.py", line 81, in
main()#Run script

File "C:/Users/HP/Desktop/FCN/Inference.py", line 76, in main
LabelPred = sess.run(Net.Pred, feed_dict={image: Images, keep_prob: 1.0})

File "D:\Users\HP\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 895, in run
run_metadata_ptr)

File "D:\Users\HP\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1097, in _run
np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)

File "D:\Users\HP\Anaconda3\lib\site-packages\numpy\core\numeric.py", line 492, in asarray
return array(a, dtype, copy=False, order=order)

ValueError: could not broadcast input array from shape (227,227,3) into shape (1,227,227)

Please help. Thank you so much.
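
The error suggests the batch array being fed was allocated as (1, 227, 227) instead of (1, 227, 227, 3); a hedged sketch of building a correct single-image batch:

    import numpy as np

    img = np.zeros((227, 227, 3), dtype=np.uint8)  # placeholder RGB image
    Images = np.expand_dims(img, axis=0)           # shape (1, 227, 227, 3)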
