
satellite-segmentation's Introduction

Satellite-Segmentation

This is a satellite remote-sensing segmentation project written with Keras, based on SegNet and U-Net.

Main ideas

  1. Segmentation with SegNet
  2. Segmentation with U-Net
  3. Model ensemble: SegNet + U-Net
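As a sketch of idea 3, one common way to ensemble SegNet and U-Net is to average their per-pixel class probabilities before taking the argmax. This is a hypothetical helper (the names `ensemble_predictions`, `prob_maps`, and `weights` are not from this repo), assuming each model outputs an (H, W, n_classes) probability map:

```python
import numpy as np

def ensemble_predictions(prob_maps, weights=None):
    """Average per-pixel class probabilities from several models.

    prob_maps: list of arrays shaped (H, W, n_classes), one per model.
    weights: optional per-model weights (default: uniform).
    Returns a (H, W) array of per-pixel class ids.
    """
    probs = np.stack([np.asarray(p, dtype=np.float64) for p in prob_maps], axis=0)
    if weights is None:
        weights = np.ones(len(prob_maps)) / len(prob_maps)
    weights = np.asarray(weights, dtype=np.float64)
    # Weighted sum over the model axis -> (H, W, n_classes)
    avg = np.tensordot(weights, probs, axes=1)
    return np.argmax(avg, axis=-1)
```

Weighted averaging lets the stronger model dominate; equal weights are the simplest starting point.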

Other ideas

  1. GAN (pix2pix): generate synthetic satellite images to enlarge the dataset
  2. DeepLab
  3. Mask R-CNN
  4. FCN
  5. RefineNet
  6. Post-processing: CRF

Update (April 2)

I have uploaded my preprocessed datasets: one prepared for SegNet training and one for U-Net training (only the buildings dataset is uploaded). So if you do not want to process the raw data yourself, you can download my preprocessed data and try things out. I suggest running SegNet first and then U-Net.

Preprocessed datasets:

Link: https://pan.baidu.com/s/1FwHkvp2esvhyOx1eSZfkog  Password: fqnw

After downloading you will see three folders: images for testing, images for U-Net training (containing src and label folders), and images for SegNet training (containing src and label folders). The SegNet training set is already cropped, but the U-Net one is not, so run this script to generate the U-Net training set:

python ./unet/gen_dataset.py

Before running it, change the image input path in that file to the uploaded U-Net training-set path, and update the output path as well.
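For reference, the core of generating a training set from a few large images is taking aligned random crops from each image/label pair. This is a minimal sketch (`random_crop` is a hypothetical helper, not the actual code in gen_dataset.py), assuming the image and its label are already loaded as NumPy arrays:

```python
import numpy as np

def random_crop(img, label, size=256):
    """Take one aligned random crop from an image/label pair.

    img: (H, W, C) array; label: (H, W) array; both with H, W >= size.
    The same window is applied to image and label so they stay aligned.
    """
    h, w = label.shape[:2]
    y = np.random.randint(0, h - size + 1)
    x = np.random.randint(0, w - size + 1)
    return img[y:y + size, x:x + size], label[y:y + size, x:x + size]
```

Repeating this a few thousand times (optionally with flips and gamma jitter, as gen_dataset.py does) turns two large images into a usable training set.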

How to run SegNet?

First edit filepath in segnet_train.py to point at the SegNet training set, then train:

python segnet_train.py --model segnet.h5

The argument after --model is the filename of the model produced by training.

Prediction: set the path of the images to predict inside segnet_predict.py, then run:

python segnet_predict.py

How to run U-Net?

Training:

python unet_train.py --model unet_buildings20.h5 --data ./unet_train/buildings/

The argument after --model is the filename of the trained model; the argument after --data is the U-Net training-set path.

Prediction: edit the path of the images to predict inside unet_predict.py, then run:

python unet_predict.py

How to visualize the labels?

  1. Some users have reported that certain images in the original training set appear completely black. That is because those images are 16-bit! The organizers really made things hard: ordinary image viewers cannot display 16-bit images. Fix: convert the 16-bit images to 8-bit, e.g. in MATLAB: im2 = uint8(im1);
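The same conversion can be done in Python. Note that MATLAB's uint8(im1) saturates values above 255, so for display it is often better to rescale by the image's own maximum; this is a rough sketch (`to_uint8` is a hypothetical helper), assuming the 16-bit image is already loaded as a NumPy array:

```python
import numpy as np

def to_uint8(img16):
    """Rescale a 16-bit image array to 8-bit for display.

    Divides by the image's own maximum rather than truncating,
    so faint 16-bit content remains visible.
    """
    img = img16.astype(np.float64)
    if img.max() > 0:
        img = img / img.max() * 255.0
    return img.astype(np.uint8)
```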

  2. Why are the labels all black? Because each class label has a value of 1 to 5, and pixels with values 1–5 of course look black! To see what the labels actually look like, refer to this file:

https://github.com/AstarLight/Satellite-Segmentation/blob/master/draw_lables.cpp

Here I did the visualization in C++; it is not hard to do in Python either. After visualizing, you will also discover the second trap the organizers set for us.
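In Python, label visualization reduces to indexing a color table with the label image. A minimal sketch (the palette colors and the class-to-color mapping here are illustrative assumptions, not the scheme used in draw_lables.cpp):

```python
import numpy as np

# Illustrative color table: one RGB color per class id 0..4
# (0 = background; class-to-color assignment is an assumption).
PALETTE = np.array([
    [0, 0, 0],        # 0: background
    [0, 255, 0],      # 1: vegetation
    [255, 0, 0],      # 2: building
    [0, 0, 255],      # 3: water
    [255, 255, 0],    # 4: road
], dtype=np.uint8)

def colorize_label(label):
    """Map a (H, W) label image with small class ids to an (H, W, 3) RGB image."""
    return PALETTE[label]
```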

Original dataset download:

Link: https://pan.baidu.com/s/1i6oMukH

Password: yqj2

Please visit my blog for more details: http://www.cnblogs.com/skyfsm/p/8330882.html

satellite-segmentation's People

Contributors

astarlight


satellite-segmentation's Issues

After training finished I got an error like 'function' object has no attribute 'called'. Does anyone know where the problem is?

Training itself went fine, but after the training process ended the following appeared:
Epoch 30/30
703/703 [==============================] - 1410s 2s/step - loss: 0.2347 - acc: 0.9036 - val_loss: 1.0093 - val_acc: 0.7139
Traceback (most recent call last):

File "", line 1, in
get_ipython().run_line_magic('run', 'segnet_train.py -m segnet.h5')

File "/opt/Anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2287, in run_line_magic
result = fn(*args,**kwargs)

File "", line 2, in run

File "/opt/Anaconda3/lib/python3.7/site-packages/IPython/core/magic.py", line 187, in
call = lambda f, *a, **k: f(*a, **k)

File "/opt/Anaconda3/lib/python3.7/site-packages/IPython/core/magics/execution.py", line 807, in run
run()

File "/opt/Anaconda3/lib/python3.7/site-packages/IPython/core/magics/execution.py", line 793, in run
exit_ignore=exit_ignore)

File "/opt/Anaconda3/lib/python3.7/site-packages/IPython/core/pylabtools.py", line 180, in mpl_execfile
if plt.draw_if_interactive.called:

AttributeError: 'function' object has no attribute 'called'

I really cannot see where the problem is... does anyone know?

Can U-Net only train one class at a time?

If it trains one class at a time, the label data still contains 5 classes; does that need extra processing?

I modified unet_train.py as follows:

```
classes_dict = {1: 'VEGETATION', 2: 'BUILDING', 3: 'WATER', 4: 'ROAD'}

def generateData(batch_size, classtype, data=[]):
    while True:
        train_data = []
        train_label = []
        batch = 0
        for i in range(len(data)):
            url = data[i]
            batch += 1
            img = load_img(filepath + 'src/' + url)
            img = img_to_array(img)
            train_data.append(img)
            label = load_img(filepath + 'label/' + url, grayscale=True)
            # Convert to an array BEFORE masking: elementwise comparison
            # does not work on a PIL image object.
            label = img_to_array(label)
            # Zero out every class except the one being trained.
            for key in classes_dict.keys():
                if key != classtype:
                    label[label == key] = 0
            train_label.append(label)
            if batch % batch_size == 0:
                yield (np.array(train_data), np.array(train_label))
                train_data = []
                train_label = []
                batch = 0

def generateValidData(batch_size, classtype, data=[]):
    while True:
        valid_data = []
        valid_label = []
        batch = 0
        for i in range(len(data)):
            url = data[i]
            batch += 1
            img = load_img(filepath + 'src/' + url)
            img = img_to_array(img)
            valid_data.append(img)
            label = load_img(filepath + 'label/' + url, grayscale=True)
            # Same fix as above: convert to an array before masking.
            label = img_to_array(label)
            for key in classes_dict.keys():
                if key != classtype:
                    label[label == key] = 0
            valid_label.append(label)
            if batch % batch_size == 0:
                yield (np.array(valid_data), np.array(valid_label))
                valid_data = []
                valid_label = []
                batch = 0
```

SegNet errors at the second pooling layer

ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_2/MaxPool' (op: 'MaxPool') with input shapes: [?,1,128,128].
How should I handle this?
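A "negative dimension" error at a pooling layer usually means the image data format does not match what the network was written for: the shape [?,1,128,128] suggests Keras is treating a channels-first tensor as channels-last and pooling a size-1 axis down to nothing. One thing worth checking (an assumption about your setup, not a confirmed fix) is the image_data_format entry in ~/.keras/keras.json; the model summaries posted in this repo's issues show channels-first shapes such as (None, 3, 256, 256), so channels_first is likely the intended setting here:

```json
{
    "image_data_format": "channels_first",
    "backend": "tensorflow",
    "epsilon": 1e-07,
    "floatx": "float32"
}
```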

The U-Net does not seem to be learning; could the learning rate or other hyperparameters be the problem?

When training U-Net the network does not appear to be learning. Has anyone hit the same situation? How did you solve it?
703/703 [==============================] - 449s - loss: -4.4077 - acc: 0.3267 - val_loss: -4.3856 - val_acc: 0.3299
Epoch 2/30
703/703 [==============================] - 444s - loss: -4.4410 - acc: 0.3268 - val_loss: -4.3856 - val_acc: 0.3299
Epoch 3/30
703/703 [==============================] - 445s - loss: -4.4410 - acc: 0.3268 - val_loss: -4.3856 - val_acc: 0.3299
Epoch 4/30
703/703 [==============================] - 444s - loss: -4.4410 - acc: 0.3268 - val_loss: -4.3856 - val_acc: 0.3299

After 30 epochs the numbers never changed.

Change the dataset and train

When I swapped in my own dataset and ran the training code segnet_train.py, I encountered the following error:
Traceback (most recent call last):
File "segnet_train.py", line 284, in
train(args)
File "segnet_train.py", line 247, in train
validation_data=generateValidData(BS,val_set), steps_per_epoch=train_numb//BS, max_queue_size=1)
File "/home/gnss/anaconda3/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/gnss/anaconda3/lib/python3.5/site-packages/keras/engine/training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "/home/gnss/anaconda3/lib/python3.5/site-packages/keras/engine/training_generator.py", line 181, in fit_generator
generator_output = next(output_generator)
File "/home/gnss/anaconda3/lib/python3.5/site-packages/keras/utils/data_utils.py", line 709, in get
six.reraise(*sys.exc_info())
File "/home/gnss/anaconda3/lib/python3.5/site-packages/six.py", line 693, in reraise
raise value
File "/home/gnss/anaconda3/lib/python3.5/site-packages/keras/utils/data_utils.py", line 685, in get
inputs = self.queue.get(block=True).get()
File "/home/gnss/anaconda3/lib/python3.5/multiprocessing/pool.py", line 608, in get
raise self._value
File "/home/gnss/anaconda3/lib/python3.5/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/gnss/anaconda3/lib/python3.5/site-packages/keras/utils/data_utils.py", line 626, in next_sample
return six.next(_SHARED_SEQUENCES[uid])
File "segnet_train.py", line 108, in generateData
train_label = labelencoder.transform(train_label)
File "/home/gnss/anaconda3/lib/python3.5/site-packages/sklearn/preprocessing/label.py", line 257, in transform
_, y = encode(y, uniques=self.classes, encode=True)
File "/home/gnss/anaconda3/lib/python3.5/site-packages/sklearn/preprocessing/label.py", line 110, in _encode
return _encode_numpy(values, uniques, encode)
File "/home/gnss/anaconda3/lib/python3.5/site-packages/sklearn/preprocessing/label.py", line 53, in _encode_numpy
% str(diff))
ValueError: y contains previously unseen labels: [7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 15.0]
This seems to be a problem with the labels? Thank you for helping me.
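The traceback shows the LabelEncoder was fit on one set of class values, while the new dataset's label images contain extra pixel values (7–15). One possible workaround, not the author's fix, is to map any unexpected value to background before encoding (`clamp_labels` is a hypothetical helper):

```python
import numpy as np

def clamp_labels(label, valid=(0, 1, 2, 3, 4)):
    """Set any pixel whose value is outside `valid` to background (0).

    Assumes 0..4 are the class ids the encoder was fit on; adjust
    `valid` to match your own class table.
    """
    label = np.asarray(label)
    out = label.copy()
    out[~np.isin(label, valid)] = 0
    return out
```

Whether clamping is appropriate depends on why the stray values are there; if they are real classes, the encoder's class list should be extended instead.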

Some questions about data.py

Do 1.png through 5.png in gen_dataset.py refer to the original 5 remote-sensing images? The downloaded data does not contain five images. What is going on? Thank you for your reply.

> Huh, why are my results all zeros? Have you run into this, or did you change something in the program? It has been troubling me for days.

(Original message, replying in #27 "use unet_predict.py for multi-classification": "@GusRoth Hi, did you get results with U-Net?" "Yes, but they look much worse than the author's; there is a lot of error and I am not sure what went wrong.")

Are you using the author's dataset? I used the author's dataset and got the result I described above. I made no changes other than fixing the error I asked about earlier (the channels_first issue), and then it ran.
I have seen the all-zeros case when running my own data.
At first it was because I fed in the wrong image to predict; after switching to the correct one and cleaning up the dataset, the results were poor but no longer all zeros.

Originally posted by @GusRoth in #27 (comment)

usage: unet_train.py [-h] [-d DATA] -m MODEL [-p PLOT] Unet_train.py: error: the following arguments are required: -m/--model,

Excuse me, in what sequence should the programs be run? I am currently getting this error:
usage: unet_train.py [-h] [-d DATA] -m MODEL [-p PLOT]
Unet_train.py: error: the following arguments are required: -m/--model,
However, the models directory under my .keras does contain the downloaded weight model, and I also changed the path in the ModelCheckpoint() arguments, but I still get the error. Why? I hope for your reply, thank you.

val_acc stays at 0.4705 over many epochs, and the predicted mask is all 1s

@james Lee, hi, thank you very much for the code; I learned a lot. After a few small setbacks I finally got it running. As a simple experiment I generated 2000 training pairs with gen_dataset.py and then slimmed down the U-Net:
Layer (type) Output Shape Param # Connected to

input_2 (InputLayer) (None, 3, 256, 256) 0


conv2d_12 (Conv2D) (None, 32, 256, 256) 896 input_2[0][0]


max_pooling2d_3 (MaxPooling2D) (None, 32, 128, 128) 0 conv2d_12[0][0]


conv2d_13 (Conv2D) (None, 64, 128, 128) 18496 max_pooling2d_3[0][0]


max_pooling2d_4 (MaxPooling2D) (None, 64, 64, 64) 0 conv2d_13[0][0]


conv2d_14 (Conv2D) (None, 128, 64, 64) 73856 max_pooling2d_4[0][0]


up_sampling2d_3 (UpSampling2D) (None, 128, 128, 128) 0 conv2d_14[0][0]


concatenate_3 (Concatenate) (None, 192, 128, 128) 0 up_sampling2d_3[0][0]
conv2d_13[0][0]


conv2d_15 (Conv2D) (None, 64, 128, 128) 110656 concatenate_3[0][0]


up_sampling2d_4 (UpSampling2D) (None, 64, 256, 256) 0 conv2d_15[0][0]


concatenate_4 (Concatenate) (None, 96, 256, 256) 0 up_sampling2d_4[0][0]
conv2d_12[0][0]


conv2d_16 (Conv2D) (None, 32, 256, 256) 27680 concatenate_4[0][0]


conv2d_17 (Conv2D) (None, 1, 256, 256) 33 conv2d_16[0][0]

Total params: 231,617
Trainable params: 231,617
Non-trainable params: 0
Then I found that val_acc peaks at 0.4705 and stays there over many epochs.
Meanwhile the predicted mask is all 1s. The network does not seem to be learning any useful segmentation.
So I want to ask: what could cause this, and how can it be fixed?

TypeError: 'NoneType' object is not callable

Hello, when I run segnet_train I get an error saying the callback function object does not exist. The model is saved successfully, but the accuracy and loss curves cannot be plotted. How can I solve this?

Backend: TensorFlow or Theano?

I get errors like this:
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_2/MaxPool' (op: 'MaxPool') with input shapes: [?,1,128,64].
What is the reason behind it, and how can it be solved? Thank you!

Parameter setting for the model, help please.

Hello, thank you very much for sharing the code. I want to know the parameter settings for SegNet and U-Net. Are the parameters the same as in your code? Or perhaps you could share the trained model with me. Waiting for your reply; thank you again. Good luck!

How were the training labels made?

I tried two methods. First, I used labelme to annotate my own two large remote-sensing images; after converting to mask images the pixel values range over 0–255, so I thresholded them into 0/1 pixels (I only have one class for now). For the predictions I mapped label 1 back to pixel value 255 and left 0 unchanged, but the output looks very messy, mostly jagged edges, not as smooth as a normal image. I do not know whether the problem is in training, in the labels, or in my visualization of the prediction. Could someone explain how the labels should be made?
(Second method: after annotating with labelme, convert label.png into 0/1 pixels. I tried that last night and training did not seem to work.)

about visualize on tensorboard

Hello,
I have a question for you.
If I define a new scalar, how can I visualize it on TensorBoard in Keras?
For example, I define a new variable learning_rate that changes per epoch; how can I visualize it on TensorBoard?
I use Keras with the TensorFlow backend.
Looking forward to your reply.

error when "python segnet_train.py --model segnet.h5"

Traceback (most recent call last):
File "segnet_train.py", line 256, in
train(args)
File "segnet_train.py", line 212, in train
model = SegNet()
File "segnet_train.py", line 200, in SegNet
model.add(Reshape((n_label,img_w*img_h)))
File "/home/gumingqi/anaconda2/envs/segnet/lib/python2.7/site-packages/keras/models.py", line 522, in add
output_tensor = layer(self.outputs[0])
File "/home/gumingqi/anaconda2/envs/segnet/lib/python2.7/site-packages/keras/engine/topology.py", line 638, in call
output_shape = self.compute_output_shape(input_shape)
File "/home/gumingqi/anaconda2/envs/segnet/lib/python2.7/site-packages/keras/layers/core.py", line 403, in compute_output_shape
input_shape[1:], self.target_shape)
File "/home/gumingqi/anaconda2/envs/segnet/lib/python2.7/site-packages/keras/layers/core.py", line 391, in _fix_unknown_dimension
raise ValueError(msg)
ValueError: total size of new array must be unchanged
Another question: is the Keras backend Theano?
Thank you!

Could you share all dataset ?

Thank you for your code; I learned a lot from it!
I visited your blog, which says there are 5 remote-sensing images for training, but I only got 2 images from your Baidu Netdisk. Could you share all the images? Thank you!

Error after U-Net finishes one epoch

Could someone help ?

Error:
ValueError: Error when checking target: expected conv2d_19 to have 4 dimensions, but got array with shape (5, 256, 256)

Summary :
Layer (type) Output Shape Param # Connected to
input_1 (InputLayer) (None, 256, 256, 3) 0
conv2d_1 (Conv2D) (None, 256, 256, 32) 896 input_1[0][0]
conv2d_2 (Conv2D) (None, 256, 256, 32) 9248 conv2d_1[0][0]
max_pooling2d_1 (MaxPooling2D) (None, 128, 128, 32) 0 conv2d_2[0][0]
conv2d_3 (Conv2D) (None, 128, 128, 64) 18496 max_pooling2d_1[0][0]
conv2d_4 (Conv2D) (None, 128, 128, 64) 36928 conv2d_3[0][0]
max_pooling2d_2 (MaxPooling2D) (None, 64, 64, 64) 0 conv2d_4[0][0]
conv2d_5 (Conv2D) (None, 64, 64, 128) 73856 max_pooling2d_2[0][0]
conv2d_6 (Conv2D) (None, 64, 64, 128) 147584 conv2d_5[0][0]
max_pooling2d_3 (MaxPooling2D) (None, 32, 32, 128) 0 conv2d_6[0][0]
conv2d_7 (Conv2D) (None, 32, 32, 256) 295168 max_pooling2d_3[0][0]
conv2d_8 (Conv2D) (None, 32, 32, 256) 590080 conv2d_7[0][0]
max_pooling2d_4 (MaxPooling2D) (None, 16, 16, 256) 0 conv2d_8[0][0]
conv2d_9 (Conv2D) (None, 16, 16, 512) 1180160 max_pooling2d_4[0][0]
conv2d_10 (Conv2D) (None, 16, 16, 512) 2359808 conv2d_9[0][0]
up_sampling2d_1 (UpSampling2D) (None, 32, 32, 512) 0 conv2d_10[0][0]
concatenate_1 (Concatenate) (None, 32, 32, 768) 0 up_sampling2d_1[0][0]
conv2d_8[0][0]
conv2d_11 (Conv2D) (None, 32, 32, 256) 1769728 concatenate_1[0][0]
conv2d_12 (Conv2D) (None, 32, 32, 256) 590080 conv2d_11[0][0]
up_sampling2d_2 (UpSampling2D) (None, 64, 64, 256) 0 conv2d_12[0][0]
concatenate_2 (Concatenate) (None, 64, 64, 384) 0 up_sampling2d_2[0][0]
conv2d_6[0][0]
conv2d_13 (Conv2D) (None, 64, 64, 128) 442496 concatenate_2[0][0]
conv2d_14 (Conv2D) (None, 64, 64, 128) 147584 conv2d_13[0][0]
up_sampling2d_3 (UpSampling2D) (None, 128, 128, 128) 0 conv2d_14[0][0]
concatenate_3 (Concatenate) (None, 128, 128, 192) 0 up_sampling2d_3[0][0]
conv2d_4[0][0]
conv2d_15 (Conv2D) (None, 128, 128, 64) 110656 concatenate_3[0][0]
conv2d_16 (Conv2D) (None, 128, 128, 64) 36928 conv2d_15[0][0]
up_sampling2d_4 (UpSampling2D) (None, 256, 256, 64) 0 conv2d_16[0][0]
concatenate_4 (Concatenate) (None, 256, 256, 96) 0 up_sampling2d_4[0][0]
conv2d_2[0][0]
conv2d_17 (Conv2D) (None, 256, 256, 32) 27680 concatenate_4[0][0]
conv2d_18 (Conv2D) (None, 256, 256, 32) 9248 conv2d_17[0][0]
conv2d_19 (Conv2D) (None, 256, 256, 1) 33 conv2d_18[0][0]

Total params: 7,846,657
Trainable params: 7,846,657
Non-trainable params: 0

Code:
```
def unet():
    inputs = Input((img_w, img_h, 3))

    conv1 = Conv2D(32, (3, 3), activation="relu", padding="same")(inputs)
    conv1 = Conv2D(32, (3, 3), activation="relu", padding="same")(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

    conv2 = Conv2D(64, (3, 3), activation="relu", padding="same")(pool1)
    conv2 = Conv2D(64, (3, 3), activation="relu", padding="same")(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)

    conv3 = Conv2D(128, (3, 3), activation="relu", padding="same")(pool2)
    conv3 = Conv2D(128, (3, 3), activation="relu", padding="same")(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)

    conv4 = Conv2D(256, (3, 3), activation="relu", padding="same")(pool3)
    conv4 = Conv2D(256, (3, 3), activation="relu", padding="same")(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)

    conv5 = Conv2D(512, (3, 3), activation="relu", padding="same")(pool4)
    conv5 = Conv2D(512, (3, 3), activation="relu", padding="same")(conv5)

    up6 = concatenate([UpSampling2D(size=(2, 2))(conv5), conv4], axis=3)
    conv6 = Conv2D(256, (3, 3), activation="relu", padding="same")(up6)
    conv6 = Conv2D(256, (3, 3), activation="relu", padding="same")(conv6)

    up7 = concatenate([UpSampling2D(size=(2, 2))(conv6), conv3], axis=3)
    conv7 = Conv2D(128, (3, 3), activation="relu", padding="same")(up7)
    conv7 = Conv2D(128, (3, 3), activation="relu", padding="same")(conv7)

    up8 = concatenate([UpSampling2D(size=(2, 2))(conv7), conv2], axis=3)
    conv8 = Conv2D(64, (3, 3), activation="relu", padding="same")(up8)
    conv8 = Conv2D(64, (3, 3), activation="relu", padding="same")(conv8)

    up9 = concatenate([UpSampling2D(size=(2, 2))(conv8), conv1], axis=3)
    conv9 = Conv2D(32, (3, 3), activation="relu", padding="same")(up9)
    conv9 = Conv2D(32, (3, 3), activation="relu", padding="same")(conv9)

    conv10 = Conv2D(n_label, (1, 1), activation="sigmoid")(conv9)
    # conv10 = Conv2D(n_label, (1, 1), activation="softmax")(conv9)

    model = Model(inputs=inputs, outputs=conv10)
    model.compile(optimizer='Adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
```

About running unet_train.py

Hello, when I run the program I get this error:
ValueError: Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 32, 32, 512), (None, 32, 32, 256)].
How should this be solved? Looking forward to your reply, thank you.

use unet_predict.py for multi-classification

pred = model.predict(crop,verbose=0)
pred = pred.reshape((256,256)).astype(np.uint8)
When using unet_predict.py for multi-class prediction, the line pred = pred.reshape((256,256)).astype(np.uint8) raises an error saying 256x256x5 values cannot be reshaped to 256x256. Should it be changed to pred = pred.reshape((256, 256, 5)).astype(np.uint8)?
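For a 5-class output, reshaping to (256, 256, 5) keeps the per-class probabilities; to get a single label mask you then take the argmax over the class axis. A sketch of the idea (`prob_to_mask` is a hypothetical helper, not code from unet_predict.py):

```python
import numpy as np

def prob_to_mask(pred, h=256, w=256, n_classes=5):
    """Collapse a flattened multi-class softmax output to a label mask.

    pred: model output reshapeable to (h, w, n_classes), e.g.
    (1, h*w, n_classes). Returns a (h, w) uint8 mask of class ids.
    """
    pred = np.asarray(pred).reshape((h, w, n_classes))
    return np.argmax(pred, axis=-1).astype(np.uint8)
```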

Would you provide code of preprocess data?

Thank you for your code; I learned a lot from it!
I visited your blog; cutting the big images into 256*256 small pictures was useful to me.
Would you share the preprocessing code?
Thanks.

Data generation crashes midway

After running gen_dataset.py I get the errors below. The number of files in the output folder varies; I clear it before each run and sometimes end up with 1 image, sometimes 4 or 9. I am running on Windows with Python 3.6.
File "D:/untitled/nlp/gen_dataset.py", line 18, in gamma_transform
return cv2.LUT(img, gamma_table)
TypeError: Expected cv::UMat for argument 'src'
The error is not always the same on repeated runs, e.g.:
File "D:/untitled/nlp/gen_dataset.py", line 18, in gamma_transform
return cv2.LUT(img, gamma_table)
TypeError: Expected cv::UMat for argument 'img'
Another time:
temp_x = np.random.randint(0, img.shape[0])
AttributeError: 'tuple' object has no attribute 'shape'

invalid size

Hello, when I run segnet_predict.py I get an "invalid size" message, and the output image is completely black. How can I solve this? Awaiting your answer.

Trained model

Does the author have a trained model?
My lab only has a single 1050 Ti; I want to see the results but cannot train the model myself. I hope you can upload a trained model.

ValueError: y contains previously unseen labels: 38.0

When I set n_label to 2 and classes to [0., 1.] for binary training, I got this error: ValueError: y contains previously unseen labels: 38.0
How can I fix it? Thanks everyone for the help!

something wrong

raise ValueError('Unsupported image shape: ', x.shape)
ValueError: ('Unsupported image shape: ', ())
I have this problem; can someone help me solve it? Thank you.

Could you provide U-Net multi-class code?

Hello, thank you very much for providing the algorithm code. I want to test multi-class segmentation with U-Net on top of your code, but my changes to the network and generateTrainData never work. Could you help by providing U-Net multi-class code? Thanks.

about running segnet models value error

When I ran segnet.py, it raised this error:
raise ValueError('Unsupported image shape: ', x.shape)
ValueError: ('Unsupported image shape: ', ())
I would be grateful if you could answer this question @AstarLight.
Thank you so much!
