aichallenger / ai_challenger_2017

AI Challenger, a platform for open datasets and programming competitions for artificial intelligence (AI) talents around the world.

Home Page: https://challenger.ai/

Python 47.77% Jupyter Notebook 8.14% Shell 0.04% Perl 0.10% Prolog 0.19% OpenEdge ABL 43.60% Smalltalk 0.01% Emacs Lisp 0.11% JavaScript 0.01% NewLisp 0.01% Ruby 0.01% Slash 0.01% SystemVerilog 0.01%

ai_challenger_2017's People

Contributors

aichallenger, bmyan, hitvoice, kingulight, leonlulu, zhhezhhe


ai_challenger_2017's Issues

machine translation baseline run.sh error

I got this error when executing run.sh, after running prepare.sh successfully:

n_baseline/train$ ./run.sh 
python: can't open file '../tensor2tensor/bin/t2t-trainer': [Errno 2] No such file or directory
python: can't open file '../tensor2tensor/bin/t2t-datagen': [Errno 2] No such file or directory
python: can't open file '../tensor2tensor/bin/t2t-trainer': [Errno 2] No such file or directory

System: Ubuntu 16.04
Is there something I missed?
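
A quick way to confirm the layout the script expects is to check the relative paths from the error messages; a minimal sketch, assuming it is run from the same train directory as run.sh:

import os

# run.sh invokes these scripts via relative paths; if either is missing,
# tensor2tensor has not been cloned/copied next to the train directory.
for rel in ("../tensor2tensor/bin/t2t-datagen", "../tensor2tensor/bin/t2t-trainer"):
    print("%s: %s" % (rel, "found" if os.path.isfile(rel) else "MISSING"))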

BLEU_4 score bug in LeaderBoard?

Our BLEU_4 score (0.73298) on the leaderboard is approximately equal to the BLEU_1 score (0.73563) we measured offline on the eval dataset, and much higher than our offline BLEU_4 score (0.41735).

I wonder if the score displayed on the leaderboard is wrong.

About the submission JSON file

In caption_validation_annotations_20170910.json:
"image_id": "3cd32bef87ed98572bac868418521852ac3f6a70.jpg"
So should the predicted JSON file for submission look like the example below?
[
  {
    "caption": "一个面对着蓝天大海的女人坐在海边的沙滩椅子",
    "image_id": "3cd32bef87ed98572bac868418521852ac3f6a70"
  }
]
That is, should '.jpg' be removed from each image_id?
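
For reference, a minimal sketch of building such a submission list from per-file predictions, stripping the '.jpg' suffix (the predictions dict and output file name below are illustrative, not from the baseline):

# -*- coding: utf-8 -*-
import io
import json
import os

# illustrative predictions: file name -> generated caption
predictions = {
    "3cd32bef87ed98572bac868418521852ac3f6a70.jpg": u"一个面对着蓝天大海的女人坐在海边的沙滩椅子",
}

# drop the '.jpg' extension to form the image_id used in the submission format
submission = [{"image_id": os.path.splitext(name)[0], "caption": caption}
              for name, caption in predictions.items()]

payload = json.dumps(submission, ensure_ascii=False, indent=2)
if isinstance(payload, bytes):  # Python 2 may hand back a byte string here
    payload = payload.decode("utf-8")
with io.open("submission.json", "w", encoding="utf-8") as fd:
    fd.write(payload)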

license information

Hi, what is the license for using this data? Can it be used for commercial purposes? Thanks.

label standard?

0a4dc5cce5168f04439770a010e77304e507cec8

Can you tell me why there is only 1 person in the label? Why isn't the little boy labeled? What is the standard for labeling a person?

Why does this one have a 3-person label:
0a5de767f71c81be103c1ba3739dbbe5c98ed3f5

but this one has 3????

0a2a8aa8f22bd58950d94a73c564aa62b0f2577f

I really cannot understand your standard for judging a person, and the above images were chosen randomly from 50 images. I also don't know, for the following image, how many persons you have labeled:
0a05ba0a6c0f53160384184f3b321bd51a841d00
I hope you can give me a standard for judging a person. Thanks.

python scene_eval.py ... error

iebsn@iebsn-HP-Z440-Workstation:~/project/scene-classification/AI_Challenger_2017-master/Evaluation/scene_classification_eval$ python scene_eval.py --submit ./submit.json --ref ./ref.json

warnning: lacking image 7df98fcd7a85281f845910af403ba65ca1494b60.jpg in your submission file
Evaluation time of your result: 0.014003 s
{'warning': ['Inconsistent number of images between submission and reference data \n', u'lacking image 7df98fcd7a85281f845910af403ba65ca1494b60.jpg in your submission file \n'], 'score': '0.8', 'error': []}

I got this result when running the eval example; how do I solve it? Thanks.
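
The warning itself points at the cause: the submission does not cover every image in the reference file. A rough pre-check before calling scene_eval.py, assuming both files are JSON lists whose entries carry an "image_id" field (adjust the key and paths to your actual files):

import json

def image_ids(path, key="image_id"):  # "image_id" is an assumption; change it if your files differ
    with open(path) as f:
        return set(entry[key] for entry in json.load(f))

missing = image_ids("ref.json") - image_ids("submit.json")
print("%d reference images are missing from the submission" % len(missing))
for img in sorted(missing)[:10]:
    print("  missing:", img)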

run_evaluations_test.py error

python run_evaluations_test.py
Error message:
loading annotations into memory...
0:00:00.000170
creating index...
index created!
list indices must be integers, not str
.loading annotations into memory...
0:00:00.000142
creating index...
index created!
list indices must be integers, not str

Chinese image captioning: why do I always get a submission format error?

I saved the results as JSON following the code in the example, but I still get a format error??

import io, json
with io.open('result6.json', 'w', encoding='utf-8') as fd:
    fd.write(unicode(json.dumps(data, ensure_ascii=False, sort_keys=True, indent=2, separators=(',', ': '))))

caption eval hangs in run_evaluations_test.py

It hangs while computing the METEOR score:

Error: Could not find or load main class edu.stanford.nlp.process.PTBTokenizer
Error: Could not find or load main class edu.stanford.nlp.process.PTBTokenizer
setting up scorers...
computing Bleu score...
{'reflen': 0, 'guess': [0, 0, 0, 0], 'testlen': 0, 'correct': [0, 0, 0, 0]}
ratio: 1e-06
Bleu_1: 0.000
Bleu_2: 0.000
Bleu_3: 0.000
Bleu_4: 0.000
computing METEOR score...
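
The "Could not find or load main class edu.stanford.nlp.process.PTBTokenizer" lines above usually mean Java cannot find the Stanford CoreNLP jar on its classpath, so tokenization returns nothing, which matches the all-zero BLEU counts and can leave the METEOR step waiting. A rough pre-flight check, assuming you point CORENLP_JAR at wherever your copy of the evaluation code keeps the jar (the path below is a placeholder):

import subprocess

CORENLP_JAR = "/path/to/stanford-corenlp.jar"  # placeholder: set this to your local jar
p = subprocess.Popen(
    ["java", "-cp", CORENLP_JAR, "edu.stanford.nlp.process.PTBTokenizer"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate(b"hello , world .\n")
if b"Could not find or load main class" in err:
    print("classpath problem: PTBTokenizer is not in CORENLP_JAR")
else:
    print("PTBTokenizer loaded")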

# Note: Hi, could you please share the links again? They have expired.

Note:

These are the datasets for the three tracks of the first AI Challenger competition: Caption, Keypoints, Scene.
The datasets belong to 创新工场 (Sinovation Ventures) and may be used for research purposes only; commercial use is not allowed.
All rights of interpretation of the datasets belong to 创新工场. The download links below are kindly provided for academic research only, for academic research only, for academic research only.

Only for research

Only for research

Only for research

Caption train: link: https://pan.baidu.com/s/1YziBPLiU2WmE0j35oaXeKw password: asix
Caption validation: link: https://pan.baidu.com/s/1p_0V89d4wfxk-7f7QsU9rg password: dcnn

Keypoint train: link: https://pan.baidu.com/s/1soAkYImmQrXnSsxcF-YjxA password: 43om
Keypoint validation: link: https://pan.baidu.com/s/16pnIBBRqU16noVlZh-ksYA password: ti41

Scene train: link: https://pan.baidu.com/s/1ZOJosoulaW2U_E9nHM8NeA password: vou3
Scene validation: link: https://pan.baidu.com/s/1qHVnZ8T59ioetzVv14-grQ password: 5ogk

Originally posted by @zhhezhhe in #42 (comment)

About the intended use of the data

Hello!
First of all, thank you for providing such high-quality data.
May I ask whether this data can be used for commercial purposes, or is it licensed for research use only?

Thanks 🙏

I can't download the data

I submitted the authentication, but it has been stuck in review status. Can you help me resolve this? I want to download the data for research.

Could you share it on Dropbox or Google Drive?

Hey guys, could you please share it on Google Drive, Dropbox, an S3 bucket, or somewhere else that can host datasets easily and cheaply?

百度云盘 (Baidu Netdisk) is really annoying, and the links keep expiring after a few days.

Scene classification baseline: the accuracy used during training is not top-3?

train_step, cross_entropy, logits, keep_prob = network.inference(features, one_hot_labels)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
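
For comparison, a top-3 accuracy can be computed with tf.nn.in_top_k instead of the exact argmax match; a minimal sketch in TF 1.x style, reusing the logits and one_hot_labels tensors from the snippet above:

import tensorflow as tf

# a prediction counts as correct if the true class is among the 3 highest-scoring
# logits, matching the competition's top-3 metric rather than the top-1 argmax check
labels = tf.argmax(one_hot_labels, 1)
in_top3 = tf.nn.in_top_k(logits, labels, 3)
top3_accuracy = tf.reduce_mean(tf.cast(in_top3, tf.float32))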

caption_eval: error when running the test demo

Running the following in the directory AI_Challenger/AI_Challenger_eval_public/caption_eval:

python run_evaluations.py -submit ./data/id_to_test_caption.json -ref ./data/id_to_words.json

produces the error below:
loading annotations into memory...
0:00:00
creating index...
index created!
Loading and preparing results...
Building prefix dict from the default dictionary ...
Loading model from cache c:\users\lc\appdata\local\temp\jieba.cache
Loading model cost 0.420 seconds.
Prefix dict has been built succesfully.
{'error': 1}

OS: Windows 10
Python version: 2.7.13

Is this a known error?

NotFoundError: /home/sk/ai_challenger_caption_train_20170902/caption_train_images_20170902/e55ba6db106e61f50802ed3547b325ced2e32a3a.jpg

Chinese caption evaluation

Could you provide the script that generates id_to_word.json, or the dataset and annotations used for the Chinese evaluation? Thanks!

Request for Access to Dataset for Testing

As part of my research on the MMPose library, I require access to a dataset for testing and validation purposes. I am particularly interested in datasets with various human poses in different environments. This data will be used exclusively for academic purposes and handled confidentially. Your support would greatly contribute to the success of my thesis. Thank you for considering my request.

ValueError: You must specify one of the supported problems to generate data for:

In the translation task, running the provided run.sh directly produces the error below. Does anyone know why?

Traceback (most recent call last):
File "../tensor2tensor/tensor2tensor/bin/t2t-datagen", line 213, in
tf.app.run()
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "../tensor2tensor/tensor2tensor/bin/t2t-datagen", line 160, in main
raise ValueError(error_msg)
ValueError: You must specify one of the supported problems to generate data for:

Traceback (most recent call last):
File "../tensor2tensor/tensor2tensor/bin/t2t-trainer", line 96, in
tf.app.run()
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "../tensor2tensor/tensor2tensor/bin/t2t-trainer", line 92, in main
schedule=FLAGS.schedule)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensor2tensor/utils/trainer_utils.py", line 352, in run
hparams=hparams)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 193, in run
experiment = wrapped_experiment_fn(run_config=run_config, hparams=hparams)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 79, in wrapped_experiment_fn
experiment = experiment_fn(run_config, hparams)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensor2tensor/utils/trainer_utils.py", line 123, in experiment_fn
run_config=run_config)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensor2tensor/utils/trainer_utils.py", line 135, in create_experiment
run_config=run_config)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensor2tensor/utils/trainer_utils.py", line 183, in create_experiment_components
add_problem_hparams(hparams, FLAGS.problems)
File "/home/ninghongke/tensorflow/lib/python2.7/site-packages/tensor2tensor/utils/trainer_utils.py", line 245, in add_problem_hparams
raise LookupError(error_msg)
LookupError: translate_enzh not in the set of supported problems:
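
The LookupError means the translate_enzh problem was never registered with the tensor2tensor copy that Python actually imports (the traceback shows the site-packages installation, not the baseline's bundled ../tensor2tensor). A quick, hedged way to check which copy is in use and which translation problems it has registered, assuming your version exposes registry.list_problems():

import tensor2tensor
print(tensor2tensor.__file__)  # a site-packages path here means the bundled copy is NOT the one being used

from tensor2tensor.utils import registry
from tensor2tensor.data_generators import all_problems  # importing this registers the built-in problems
print(sorted(name for name in registry.list_problems() if "translate" in name))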

keypoint eval error

Hello, I looked at your keypoint evaluation file, 'keypoint_eval.py', and I think the method has a problem.
With your method, if an image contains 2 persons but the ground truth annotates only 1 and my model predicts 2, the result will be bad, because of this code:

oks_all = np.concatenate((oks_all, np.max(oks, axis=0)), axis=0)
oks_num += np.max(oks.shape)

I think this should be changed to:

oks_all = np.concatenate((oks_all, np.max(oks, axis=1)), axis=0)
oks_num += np.min(oks.shape)
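
A small numpy example makes the difference concrete: assuming the oks matrix is shaped (num_ground_truth, num_predictions), as the complaint above implies, max over axis=0 scores each prediction while max over axis=1 scores each annotated person, and max(oks.shape) vs. min(oks.shape) count different totals whenever the two sizes differ (the values below are made up for illustration):

import numpy as np

# 1 annotated ground-truth person, 2 predicted persons -> oks has shape (1, 2)
oks = np.array([[0.9, 0.1]])

print(np.max(oks, axis=0), np.max(oks.shape))  # per-prediction scores [0.9 0.1], counted as 2
print(np.max(oks, axis=1), np.min(oks.shape))  # per-ground-truth scores [0.9], counted as 1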

About the Chinese caption evaluation

I looked at the Chinese caption evaluation code and found that the generated sentences are first segmented with jieba and then tokenized again with PTBTokenizer before the four metrics are computed. Is there a reason for doing it this way? Why not do just one segmentation step?
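
For context, the jieba step is what turns a Chinese sentence into space-separated tokens, which is the form the downstream metric code (written for English) expects; a minimal illustration of that first step (the caption string is just an example):

# -*- coding: utf-8 -*-
import jieba

caption = u"一个面对着蓝天大海的女人坐在海边的沙滩椅子"
segmented = u" ".join(jieba.cut(caption))  # space-joined word tokens
print(segmented)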

Why does the online evaluation always fail?

Hi, why do I get no feedback at all when I submit my image captioning results? I have organized the JSON according to your requirements. In principle there should be some feedback, just like the MSCOCO online server gives.
