
Comments (4)

huoyijie commented on June 16, 2024

1. Does this mean training the network in a coarse-to-fine process, initializing it at 256x256 and then fine-tuning it on larger sizes?
Yes.
2. Does this make the network converge faster than training at size 736x736 directly?
Yes, because training directly at size 736 is very slow.

My own training method:
set cfg.train_task_id = '2T256'
set patience to 5 (anywhere between 2 and 6)
python preprocess.py && python label.py && python advanced_east.py

When training ends, copy the best saved weights file (.h5) to initialize training at size 384: set cfg.train_task_id = '2T384', set cfg.initial_epoch to the epoch where the previous stage ended, set cfg.load_weights = True, and continue training.

Then train at 512, and so on; a rough sketch of this staged schedule is given below. You could try this method, and maybe there are better ways.
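
A minimal sketch of that staged schedule, assuming you drive the repository's own scripts from Python. The cfg field names (train_task_id, load_weights, initial_epoch) are the ones mentioned above; the stage list and the idea of editing cfg.py between stages are just how I would organize it, not code from the repo.

```python
# Sketch of the coarse-to-fine schedule described above. The three scripts
# (preprocess.py, label.py, advanced_east.py) are the ones from AdvancedEAST;
# the stage tuples below are illustrative placeholders.
import subprocess

stages = [
    # (train_task_id, load_weights)
    ('2T256', False),   # first stage: train from scratch at 256x256
    ('2T384', True),    # initialize from the best 256 weights (.h5)
    ('2T512', True),
    ('2T736', True),
]

for task_id, load_weights in stages:
    # Before each stage, edit cfg.py by hand:
    #   cfg.train_task_id = task_id
    #   cfg.load_weights  = load_weights
    #   cfg.initial_epoch = <epoch where the previous stage stopped>
    # and copy the previous stage's best weights file (.h5) into place.
    subprocess.run(['python', 'preprocess.py'], check=True)
    subprocess.run(['python', 'label.py'], check=True)
    subprocess.run(['python', 'advanced_east.py'], check=True)
```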


hcnhatnam commented on June 16, 2024

@huoyijie does the network still remember what it learned at 256 while training at 736?


globalmaster commented on June 16, 2024

Hi,
I downloaded the Tianchi ICPR dataset, set cfg.train_task_id = '3T256', and ran python3 preprocess.py && python3 label.py && python3 advanced_east.py, but I get this error. The output is shown below. How can I fix it? Can you help me? @LucyLu-LX @huoyijie @hcnhatnam

Epoch 00008: val_loss improved from 0.43569 to 0.42750, saving model to model/weights_3T256.008-0.427.h5
Epoch 9/24
1125/1125 [==============================] - 157s 139ms/step - loss: 0.2762 - val_loss: 0.4373

Epoch 00009: val_loss did not improve from 0.42750
Epoch 10/24
1125/1125 [==============================] - 156s 139ms/step - loss: 0.2579 - val_loss: 0.4435

Epoch 00010: val_loss did not improve from 0.42750
Epoch 11/24
1125/1125 [==============================] - 156s 139ms/step - loss: 0.2466 - val_loss: 0.4710

Epoch 00011: val_loss did not improve from 0.42750
Epoch 12/24
1125/1125 [==============================] - 156s 139ms/step - loss: 0.2342 - val_loss: 0.4633

Epoch 00012: val_loss did not improve from 0.42750
Epoch 13/24
1125/1125 [==============================] - 156s 139ms/step - loss: 0.2228 - val_loss: 0.4724

Epoch 00013: val_loss did not improve from 0.42750
Epoch 00013: early stopping


hcnhatnam commented on June 16, 2024

@globalmaster it isn't an error. Training stopped early (early stopping) to avoid overfitting. It does look like the model is not converging, though, and that is still my problem as well. @globalmaster, can you share the dataset with me via a Google Drive link?
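
For reference, the behavior in the log above is what patience-based Keras callbacks produce. A minimal sketch, assuming the standard keras.callbacks API rather than AdvancedEAST's exact training code (file naming and the patience value may differ in the repo):

```python
# Patience-based early stopping plus best-checkpoint saving, as a sketch.
from keras.callbacks import EarlyStopping, ModelCheckpoint

def fit_with_early_stopping(model, train_gen, val_gen,
                            task_id='3T256', patience=5, epochs=24):
    """Train until val_loss stops improving for `patience` epochs."""
    callbacks = [
        # stop when val_loss has not improved for `patience` consecutive epochs
        EarlyStopping(monitor='val_loss', patience=patience, verbose=1),
        # keep only the best weights, e.g. model/weights_3T256.008-0.427.h5
        ModelCheckpoint('model/weights_%s.{epoch:03d}-{val_loss:.3f}.h5' % task_id,
                        monitor='val_loss', save_best_only=True,
                        save_weights_only=True, verbose=1),
    ]
    return model.fit_generator(train_gen, epochs=epochs,
                               validation_data=val_gen, callbacks=callbacks)
```

In the log above, val_loss last improved at epoch 8 (0.42750), so with patience 5 training stops after epoch 13, which is exactly the "early stopping" message you saw.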

