east's People

Contributors

songdejia

east's Issues

TypeError: string indices must be integers

When I run run.sh, I hit a problem:
hmean.py, line 29, in compute_hmean
recall = resDict['method']['recall']
TypeError: string indices must be integers
Can someone help me?
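
A hedged guess at the cause: in ICDAR-style evaluation scripts, resDict['method'] can be left as a plain string (for example when gt.zip is missing and the evaluation never runs), and indexing that string with 'recall' raises exactly this TypeError. A minimal defensive sketch (the example resDict value below is hypothetical):

import json

# Hypothetical value resembling what a failed evaluation run can return:
resDict = {'calculated': False, 'Message': 'gt.zip not found', 'method': '{}'}

method = resDict['method']
if isinstance(method, str):
    # Evaluation failed or 'method' is still a JSON string; decode or fall back.
    method = json.loads(method) if method.strip().startswith('{') else {}

recall = method.get('recall', 0.0)
print(recall, resDict.get('Message', ''))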

How to label an image properly

I know we have to take the blank space between one word and the next into consideration when labelling an image. If there are two separated words, how do I decide whether to put them into one label or to label them separately? And when it comes to a whole sentence, how should I label an image with distortions: should I split the sentence into several parts and label each part with its own quadrangle? Thanks a million :)

Loss is always 0.01

Hello! I'm training on my own dataset; the annotation format is x1, y1, x2, y2, x3, y3, x4, y4, "###".
The loss stays at 0.01 during training, as shown below:
EAST <==> TRAIN <==> Epoch: [0][372/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][373/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][374/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][375/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][376/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][377/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][378/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][379/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][380/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][381/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][382/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][383/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][384/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][385/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][386/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][387/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][388/430] Loss 0.0100 Avg Loss 0.0100)

EAST <==> TRAIN <==> Epoch: [0][389/430] Loss 0.0100 Avg Loss 0
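
One possible explanation (an assumption, not a confirmed diagnosis): in EAST-style data loaders a transcription of "###" usually marks a box as don't-care, and such regions are zeroed out of the training mask, so they contribute nothing to the loss. If every annotation line ends in "###", the loss can collapse to a constant floor. A minimal sketch of that convention:

import numpy as np
import cv2

def apply_dont_care(training_mask, polys, dont_care_flags):
    # polys: list of (4, 2) vertex arrays; dont_care_flags[i] is True when the
    # i-th transcription is '###'. Masked regions are excluded from the loss.
    for poly, is_dont_care in zip(polys, dont_care_flags):
        if is_dont_care:
            cv2.fillPoly(training_mask, [poly.astype(np.int32)], 0)
    return training_mask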

Not able to find gt.zip on the given website

Hi,

I am trying to get the EAST algorithm to work and I am training on the ICDAR 2015 dataset. However, I am not able to find the gt.zip file, and I am also not sure whether it contains the ground truths for the train or the test images. Can someone shed some light on this and help me understand it?

My code breaks at the point where it says there is no gt.zip (obviously, since I don't have the zip and don't know how to make one).
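
A minimal sketch of how one might build gt.zip, assuming the usual ICDAR 2015 convention where each test image img_N.jpg has a ground-truth file gt_img_N.txt (the directory name below is hypothetical):

import glob
import os
import zipfile

with zipfile.ZipFile('gt.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    for txt in sorted(glob.glob('ch4_test_gt/gt_img_*.txt')):
        zf.write(txt, arcname=os.path.basename(txt))  # evaluation scripts expect a flat zip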

Train East.pytorch

Hello, I tried to train on ICDAR with your PyTorch version of EAST, iterating for nearly 8,000 epochs, but found that the results are particularly bad. Are there any training tricks? I have tried other PyTorch versions of EAST and the results were also poor. I still can't find the reason. Can you give me some advice?

Network parameter initialization

In utils/init.py, line 10, classname.find('conv') should be replaced with classname.find('Conv2d').
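
A minimal sketch of the suggested fix, assuming a weights_init hook of the usual form: PyTorch convolution layers have the class name 'Conv2d', so matching on the lowercase string 'conv' never fires and those layers keep their default initialization.

import torch.nn as nn

def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv2d') != -1:      # was: classname.find('conv')
        nn.init.kaiming_normal_(m.weight.data)
        if m.bias is not None:
            m.bias.data.zero_()
    elif classname.find('BatchNorm') != -1:
        m.weight.data.fill_(1.0)
        m.bias.data.zero_()

# model.apply(weights_init)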

License uses

I notice that your repository (especially data_util.py and the lanms library) is based on argman/EAST, which uses the GPLv3 license.

The GPLv3 license is NOT compatible with the MIT license you are currently using. Please change to a license that is compatible with GPLv3, or replace the relevant libraries in your repository, to avoid any copyright issues.

trained model

Dear all,

is it possible to have the trained model?

Thank you,

Cheers

Where to get the gt.zip

I found that the code needs gt.zip to measure accuracy, but I can't find gt.zip at the link you gave. Could you please tell me how to obtain or build it? Thanks.

UnsupportedOperation: not writable

runfile('C:/Users/陈/Desktop/EAST-master/run_demo_server.py', wdir='C:/Users/陈/Desktop/EAST-master')
Traceback (most recent call last):

File "", line 1, in
runfile('C:/Users/陈/Desktop/EAST-master/run_demo_server.py', wdir='C:/Users/陈/Desktop/EAST-master')

File "D:\huanjingdajian\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
execfile(filename, namespace)

File "D:\huanjingdajian\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "C:/Users/陈/Desktop/EAST-master/run_demo_server.py", line 226, in
main()

File "C:/Users/陈/Desktop/EAST-master/run_demo_server.py", line 223, in main
app.run('0.0.0.0', args.port)

File "D:\huanjingdajian\lib\site-packages\flask\app.py", line 938, in run
cli.show_server_banner(self.env, self.debug, self.name, False)

File "D:\huanjingdajian\lib\site-packages\flask\cli.py", line 629, in show_server_banner
click.echo(message)

File "D:\huanjingdajian\lib\site-packages\click\utils.py", line 259, in echo
file.write(message)

UnsupportedOperation: not writable
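
The crash happens because click.echo tries to write the Flask startup banner to a console stream that Spyder does not expose as writable. Running run_demo_server.py from a regular terminal avoids it; alternatively, a monkey-patch like the sketch below (an assumption, not the repo's fix) silences the banner before app.run is called:

import flask.cli

# Suppress the startup banner so click.echo is never called on the
# non-writable Spyder console stream.
flask.cli.show_server_banner = lambda *args, **kwargs: None

# ... then call app.run('0.0.0.0', args.port) as in run_demo_server.py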

Implementation of QUAD part of the paper

How do you modify the geometry map generation for the QUAD part of the paper? What does the statement "For the QUAD ground truth, the value of each pixel with positive score in the 8-channel geometry map is its coordinate shift from the 4 vertices of the quadrangle" mean? I would like to know all the modifications needed in the code to implement the QUAD part. How should the geometry map generation be modified for the QUAD method? @songdejia
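
An illustrative sketch (not the repo's code) of what that sentence describes: for QUAD ground truth, each pixel inside a text region stores, across 8 channels, its (dx, dy) shift to every one of the quadrangle's 4 vertices.

import numpy as np

def quad_geo_map(score_map, quad):
    # score_map: (H, W) binary text mask; quad: (4, 2) array of vertex (x, y).
    H, W = score_map.shape
    geo = np.zeros((H, W, 8), dtype=np.float32)
    ys, xs = np.nonzero(score_map)
    for i, (vx, vy) in enumerate(quad):
        geo[ys, xs, 2 * i] = vx - xs        # x shift from the pixel to vertex i
        geo[ys, xs, 2 * i + 1] = vy - ys    # y shift from the pixel to vertex i
    return geo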

Exception continue / Exception in getitem

After pre-training, I want to train the model on my own dataset, and I prepared my training set according to the requirements in the README. But when I run run.py, these problems occurred:
EAST <==> Prepare <==> Network <==> Done
Exception continue
Exception in getitem, and choose another index:133
Exception continue

So how could I solve this problem? Thank you.

Predict problem

How can I run prediction on a single image, and can you provide a trained model like the TensorFlow version does?

python version

Could you please tell me which Python version you use in this repo?

Can I train on multiple GPUs at the same time?

I want to train on multiple GPUs at the same time; how should I modify the code?
from keras.utils import multi_gpu_model
parallel_model = multi_gpu_model(east_network, gpus=4)
After adding these two lines I get an error. Thank you.
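
Since this repository is a PyTorch port, keras.utils.multi_gpu_model does not apply here; the usual route is torch.nn.DataParallel. A minimal sketch (the placeholder module below stands in for the EAST network built in the training script):

import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1))   # placeholder for the EAST model
if torch.cuda.device_count() > 1:
    net = nn.DataParallel(net)                        # splits each batch across visible GPUs
if torch.cuda.is_available():
    net = net.cuda()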

ImportError: bad magic number in 'geo_map_cython_lib': b'\x03\xf3\r\n'

The details of the error are as follows: Traceback (most recent call last):
File "/home/user/east/EAST-pytorch/main.py", line 11, in
from data_utils import custom_dset, collate_fn
File "/home/user/east/EAST-pytorch/data_utils.py", line 18, in
from geo_map_cython_lib import gen_geo_map
I have searched a lot but could not solve this error. How can I fix it so the code runs correctly? Thanks.
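
That magic number corresponds to a .pyc compiled by Python 2.7, so the geo_map_cython_lib build products were created with a different interpreter than the one running main.py. A cleanup-and-rebuild sketch (the setup.py invocation is an assumption about how the extension is built; adapt it to the repo's own build script if different):

import pathlib
import shutil
import subprocess

lib = pathlib.Path('geo_map_cython_lib')
for pyc in lib.rglob('*.pyc'):
    pyc.unlink()                                   # drop bytecode from the old interpreter
shutil.rmtree(lib / '__pycache__', ignore_errors=True)

# Rebuild the Cython extension with the interpreter you actually use (hypothetical step):
subprocess.run(['python', 'setup.py', 'build_ext', '--inplace'], cwd=lib, check=True)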

compile error (lanms)

g++: error: unrecognized command line option ‘-fno-plt’
Makefile:10: recipe for target 'adaptor.so' failed
make: *** [adaptor.so] Error 1

Pretrained model

Hi, great repo!
It would be very helpful if a pretrained model were provided along with the code. When will such a model be available?

Thanks in advance,
Arseny

Shape of the Output F-Score

Hi,
The F-Score output by the EAST model has shape (W/4, H/4), where W and H are the width and height of the input image, respectively. Shouldn't the F-Score be a per-pixel score, and therefore shouldn't its dimensions be (W, H) instead?

(I know this repo works, so perhaps there is a big gap in my understanding of the code. Kindly help.)

Thanks
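
For what it's worth, the 1/4 resolution is expected: EAST predicts on a stride-4 feature map, so each score-map cell covers a 4x4 patch of the input, while the geometry channels still encode distances in input-image coordinates. A small sketch of mapping score-map positions back to input pixels (illustrative only):

import numpy as np

stride = 4
score_map = np.zeros((128, 128), dtype=np.float32)   # e.g. for a 512x512 input
score_map[10, 20] = 0.95                             # a positive cell on the 1/4 grid

ys, xs = np.nonzero(score_map > 0.8)
image_xy = np.stack([xs, ys], axis=1) * stride        # back to input-image pixels
print(image_xy)                                       # [[80 40]]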

where is the run.py

I have downloaded your code and changed the dataset path, but I couldn't find run.py. In your readme.md, you have noted the following:
If you want to train the model, you should provide the dataset path in config.py and run

sh run.py

nan during training.

Hi @songdejia, thanks for porting EAST from TensorFlow. But while trying to train this model on COCO 2014 or Oxford SynthText, I get nan during training. Any ideas?

Please see below training Log:

Cross point does not exist
point dist to line raise Exception
point dist to line raise Exception
Cross point does not exist
Cross point does not exist
Cross point does not exist
Cross point does not exist
Cross point does not exist
Cross point does not exist
point dist to line raise Exception
point dist to line raise Exception
Cross point does not exist
Cross point does not exist
Cross point does not exist
Cross point does not exist
Cross point does not exist
Cross point does not exist
point dist to line raise Exception
point dist to line raise Exception
Cross point does not exist
Cross point does not exist
Cross point does not exist
Cross point does not exist
Cross point does not exist
Cross point does not exist
point dist to line raise Exception
point dist to line raise Exception
Cross point does not exist
Cross point does not exist
Cross point does not exist
Cross point does not exist
Cross point does not exist
Exception continue
Exception in getitem, and choose another index:4393
EAST <==> TRAIN <==> Epoch: [0][1/227] Loss 0.0231 Avg Loss 0.0250)

EAST <==> TRAIN <==> Epoch: [0][2/227] Loss 0.0282 Avg Loss 0.0260)

EAST <==> TRAIN <==> Epoch: [0][3/227] Loss 0.0313 Avg Loss 0.0273)

EAST <==> TRAIN <==> Epoch: [0][4/227] Loss 0.0271 Avg Loss 0.0273)

EAST <==> TRAIN <==> Epoch: [0][5/227] Loss 0.0206 Avg Loss 0.0262)

EAST <==> TRAIN <==> Epoch: [0][6/227] Loss 0.0300 Avg Loss 0.0267)

EAST <==> TRAIN <==> Epoch: [0][7/227] Loss 0.0239 Avg Loss 0.0264)

EAST <==> TRAIN <==> Epoch: [0][8/227] Loss 0.0271 Avg Loss 0.0265)

EAST <==> TRAIN <==> Epoch: [0][9/227] Loss 0.0284 Avg Loss 0.0266)

EAST <==> TRAIN <==> Epoch: [0][10/227] Loss 0.0197 Avg Loss 0.0260)

EAST <==> TRAIN <==> Epoch: [0][11/227] Loss nan Avg Loss nan)

EAST <==> TRAIN <==> Epoch: [0][12/227] Loss nan Avg Loss nan)
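
One common mitigation, sketched below as an assumption rather than this repo's code: the EAST geometry loss takes -log of an IoU ratio, which becomes inf/nan as soon as the intersection area collapses to zero, so a small epsilon (or +1 smoothing, as in the TensorFlow version) is usually added to both numerator and denominator. Degenerate ground-truth quads, hinted at by the "Cross point does not exist" warnings above, make this more likely.

import torch

def iou_log_loss(area_intersect, area_union, eps=1.0):
    # Stabilized -log(IoU); eps=1.0 mirrors the common "+1" smoothing.
    return -torch.log((area_intersect + eps) / (area_union + eps))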

Lacks L2 regularization

Hi, I read your code and the source TF version. I trained with your code but could only get 0.4 hmean on the IC15 test dataset, and I found that your implementation lacks L2 regularization, while the TF version adds a 1e-5 L2 loss to the total loss.
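
In PyTorch the usual counterpart of that 1e-5 L2 term is the optimizer's weight_decay argument; a minimal sketch (the placeholder module and optimizer choice are assumptions about the training script):

import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1))   # placeholder for the EAST model
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-5)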
