lvis-dataset / lvis-api
Python API for LVIS Dataset
Home Page: http://lvisdataset.org
License: Other
I really hope you can help with this: I can't find the [email protected] group on Google Groups. Should we just send emails to this address?
Does the weighted sum of APr, APc, and APf equal the overall AP?
Hi!
Short question:
Is there a file like https://github.com/lvis-dataset/lvis-api/blob/master/data/coco_to_synset.json, but for all LVIS labels?
In more detail:
According to the paper, as far as I understand, all LVIS labels should be a subset of the WordNet synsets:
The resulting vocabulary contains 1723 synsets—the upper bound on the number of categories that can appear in LVIS.
I explored the intersection between LVIS labels and WordNet noun synsets, and there are 15 labels that have no exact match at first glance. But all of them can be found in WordNet under different names, for example:
I evaluated such correspondences manually against the synset descriptions and image examples, and as a result I am confident that those labels have exactly the same meaning as the synsets.
For the rest of the labels it is overwhelming to find correspondences manually, but doing it just by word matching can lead to mistakes. For example:
For the label vent from LVIS there are 5 different noun synsets in WordNet, such as:
And with respect to the LVIS images, only the first one is correct.
So my question is: is it possible to find such LVIS-label-to-WordNet-synset correspondences automatically, and if so, how?
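A starting point for automatic matching could be to query WordNet for each LVIS name and then filter the candidates by their definitions, as you did manually. A minimal sketch, assuming nltk with the wordnet corpus downloaded (the label "vent" is just the example from this issue):

from nltk.corpus import wordnet as wn

def candidate_synsets(label):
    # LVIS-style names are lowercase; WordNet lemmas join words with "_"
    lemma = label.lower().replace(" ", "_")
    return wn.synsets(lemma, pos=wn.NOUN)

for syn in candidate_synsets("vent"):
    print(syn.name(), "-", syn.definition())

This only narrows the candidates; disambiguating among several noun senses would still need the definition/image check described above.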
Thanks for the clean code!
I am confused about the _get_gt_dt function: at lines 158 and 163 (commit 031ac21), cat_id is not used. Should cat_id actually be _cat_id, i.e. for _ann in self._gts[img_id, _cat_id]?
Anyway, the default value of use_cats is set to 1, so this piece of code is not used in evaluation.
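For illustration, a self-contained toy version of the pattern being questioned (a hypothetical stand-in, not the repo's code): if the lookup used an outer cat_id instead of the loop variable _cat_id, every iteration would fetch the same key.

from collections import defaultdict

_gts = defaultdict(list)
_gts[1, 1] = ["ann_a"]
_gts[1, 2] = ["ann_b"]

img_id, cat_ids = 1, [1, 2]
# using the loop variable gathers annotations from every category
gt = [ann for _cat_id in cat_ids for ann in _gts[img_id, _cat_id]]
print(gt)  # ['ann_a', 'ann_b']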
Thanks for offering this API.
I can't open the website https://www.lvisdataset.org/, so I would like to ask how to get the dataset through this API, and where to find instructions for using it.
Thank you!
Hi! I have a question regarding the "lvis-api/data/coco_to_synset.json" file: in COCO_SYNSET_CATEGORIES, where each synset is mapped to a COCO id, the ids range from 1 to 90, which means 10 ids are not present, since there are only 80 classes in COCO. This causes classes with a COCO id > 80 not to count in the evaluation (as is the case in detectron2). Why did you choose to do this?
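One way to see which ids are skipped (a sketch; the path assumes a checkout of this repo, and it relies on the "coco_cat_id" field visible in the mapping entries quoted elsewhere in this tracker):

import json

with open("data/coco_to_synset.json") as f:
    mapping = json.load(f)

present = {entry["coco_cat_id"] for entry in mapping.values()}
missing = sorted(set(range(1, 91)) - present)
print(len(present), "ids present; missing:", missing)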
Tried installing the LVIS API, but getting this error:
ERROR: Could not find a version that satisfies the requirement matplotlib==3.1.1 (from lvis) (from versions: 0.86, 0.86.1, 0.86.2, 0.91.0, 0.91.1, 1.0.1, 1.1.0, 1.1.1, 1.2.0, 1.2.1, 1.3.0, 1.3.1, 1.4.0, 1.4.1rc1, 1.4.1, 1.4.2, 1.4.3, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 2.0.0b1, 2.0.0b2, 2.0.0b3, 2.0.0b4, 2.0.0rc1, 2.0.0rc2, 2.0.0, 2.0.1, 2.0.2, 2.1.0rc1, 2.1.0, 2.1.1, 2.1.2, 2.2.0rc1, 2.2.0, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 3.0.0rc2, 3.0.0, 3.0.1, 3.0.2, 3.0.3)
ERROR: No matching distribution found for matplotlib==3.1.1 (from lvis)
I downloaded the training set of 1,270,141 instances (1 GB) and 100,170 images (18 GB) from "https://www.lvisdataset.org/dataset". I wonder where the ground-truth segmentation masks for these 100,170 images are. Thanks.
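If I understand the format correctly, the masks are not separate image files: they are stored as polygons inside the annotation JSON. A minimal sketch for inspecting one (the file name is an assumption; adjust it to the annotation file you downloaded):

from lvis import LVIS

lvis_gt = LVIS("lvis_v1_train.json")
ann_ids = lvis_gt.get_ann_ids(img_ids=lvis_gt.get_img_ids()[:1])
ann = lvis_gt.load_anns(ann_ids)[0]
print(ann["segmentation"])          # polygon coordinates
mask = lvis_gt.ann_to_mask(ann)     # decoded binary mask
print(mask.shape, mask.sum())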
Hey folks,
Looks like there are 5K images in the val2017 set available here: http://images.cocodataset.org/zips/val2017.zip
The v1.0 description, however, says that there are 19,809 images in the val set.
Am I using an outdated link for the val set?
Regards,
Viresh
Hello everyone, I'm trying to use detectron2 with LVIS and I find that I'm unable to detect people using the LVIS instances.
Could someone tell me why?
Thank you for your help.
Hi @agrimgupta92,
As I understand it, LVIS has more than 1200 categories, and each category should get its own color for visualization. However, I checked the method below, and only ~80 colors are defined, so the modulo indexing makes colors repeat across categories:
def get_color(self, idx):
    color_list = colormap(rgb=True) / 255
    return color_list[idx % len(color_list), 0:3]
So, is there something wrong here? Thanks
Hi, I want to download the LVIS v0.5 dataset, but only v1.0 is available now. How can I download LVIS v0.5? Thank you.
The LVIS website states that the validation set has 20k images, but when I downloaded it, there were only 5k images.
Where can I download the 20k-image validation set? Can you give me a link?
The results from EvalAI are inconsistent with the results from the LVIS API. Do you use different evaluation code on EvalAI?
Hi, I use nltk.corpus.wordnet to get the synset 'stop_sign.n.01', but get the following error:
>>> from nltk.corpus import wordnet as wn
>>> wn.synset('stop_sign.n.01')
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
~/miniconda3/lib/python3.7/site-packages/nltk/corpus/reader/wordnet.py in synset(self, name)
   1333     try:
-> 1334         offset = self._lemma_pos_offset_map[lemma][pos][synset_index]
   1335     except KeyError:
KeyError: 'n'

During handling of the above exception, another exception occurred:

WordNetError                              Traceback (most recent call last)
<ipython-input-53-b431537ed60c> in <module>
----> 1 wn.synset('stop_sign.n.01')
~/miniconda3/lib/python3.7/site-packages/nltk/corpus/reader/wordnet.py in synset(self, name)
   1335     except KeyError:
   1336         message = 'no lemma %r with part of speech %r'
-> 1337         raise WordNetError(message % (lemma, pos))
   1338     except IndexError:
   1339         n_senses = len(self._lemma_pos_offset_map[lemma][pos])
WordNetError: no lemma 'stop_sign' with part of speech 'n'
Could you suggest a solution? Which version of WordNet do you use?
Thanks
The link to explore the LVIS dataset on the official website (https://www.lvisdataset.org/explore) is currently unavailable.
The LVIS API on PyPI is not up to date after the recent changes to support numpy==1.18.
Could you update the package on PyPI and also update the Python package versions in requirements.txt?
Thanks!
The class 'hot dog' in COCO is described in coco_to_synset.json as follows:
"hot dog": {"coco_cat_id": 58, "meaning": "a smooth-textured sausage, usually smoked, often served on a bread roll", "synset": "frank.n.02"},
but there is no "frank.n.02" in the LVIS classes. What is the corresponding class for "hot dog" in LVIS?
Thanks!
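One way to check this directly is to scan the "synset" field of the LVIS category records. A sketch, assuming the v1 annotation file name:

import json

with open("lvis_v1_val.json") as f:
    cats = json.load(f)["categories"]

matches = [c["name"] for c in cats if c.get("synset") == "frank.n.02"]
print(matches or "no LVIS category with synset frank.n.02")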
Thanks for this awesome dataset. I wonder when I can access the test set and where to participate in the LVIS challenge.
Logging in the LVIS API can interfere with the logging used in object detection toolboxes like mmdetection. Steps to reproduce:
1. git clone https://github.com/open-mmlab/mmdetection.git
2. cd mmdetection
3. pip install -v -e .
4. Add from lvis.lvis import LVIS to a file (e.g. mmdet/datasets/coco.py)
5. python3 tools/train.py configs/faster_rcnn_r50_fpn_1x.py
The training function will start its printed outputs with something like:
[10/02 00:35:51] root WARNING: The model and loaded state dict do not match exactly
and no logging outputs will be printed during the training.
Normally the first printed lines are:
2019-10-02 00:38:32,040 - INFO - Distributed training: False
2019-10-02 00:38:32,557 - INFO - load model from: torchvision://resnet50
2019-10-02 00:38:33,420 - WARNING - The model and loaded state dict do not match exactly
and logging outputs are printed during training, e.g.:
2019-10-02 00:40:31,962 - INFO - Start running, host: xxx@zzz, work_dir: $CURRENT_DIR/work_dirs/faster_rcnn_r50_fpn_1x
2019-10-02 00:40:31,967 - INFO - workflow: [('train', 1)], max: 12 epochs
2019-10-02 00:40:03,400 - INFO - Epoch [1][50/14186] lr: 0.00199, eta: 1 day, 5:42:13, time: 0.628, data_time: 0.031, memory: 3852, loss_rpn_cls: 0.4321, loss_rpn_bbox: 0.0947, loss_cls: 1.2581, acc: 90.4229, loss_bbox: 0.1021, loss: 1.8871
2019-10-02 00:40:27,730 - INFO - Epoch [1][100/14186] lr: 0.00233, eta: 1 day, 2:20:42, time: 0.487, data_time: 0.019, memory: 3852, loss_rpn_cls: 0.3118, loss_rpn_bbox: 0.0992, loss_cls: 0.7181, acc: 93.3760, loss_bbox: 0.1250, loss: 1.2541
This seems to be a conflict between the two loggers. My quick fix was to remove the logger in the LVIS API and replace it with print functions where appropriate. I am happy to submit this as a pull request, but I guess the issue needs some discussion and a decision from your side on how to proceed.
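An alternative to switching to print (a sketch of the conventional library-side fix, under the assumption that the LVIS logger currently touches root-logger configuration): use a module-level named logger with a NullHandler, so importing lvis never reconfigures the logging that mmdetection sets up.

import logging

# named per-module logger instead of logging.basicConfig()/root handlers
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())  # the application decides the output

def load_annotations(path):
    # hypothetical call site inside the library
    logger.info("Loading annotations from %s", path)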
The LICENSE and requirements.txt files are missing from the PyPI distribution.
Steps to reproduce the behavior:
Download https://files.pythonhosted.org/packages/ea/fe/c18531099e7538bd6a53de8b2f8e900a5cf6a82d0c603325031a4122da5a/lvis-0.5.3.tar.gz and check the file contents.
The archive should contain the LICENSE and requirements.txt files.
I wonder if there is a way to know what each category corresponds to.
For example, in COCO we do
catIds = coco.getCatIds(catNms=['cat']);
to get all the images that have cats. How can we do the same here?
Thanks
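For reference, a sketch of the closest LVIS-API equivalent (note the snake_case method names; the category name "cat" and the annotation file name are assumptions, so check the "name"/"synonyms" fields for exact spellings):

from lvis import LVIS

lvis = LVIS("lvis_v1_val.json")
cats = lvis.load_cats(lvis.get_cat_ids())            # all category records
cat_ids = [c["id"] for c in cats if c["name"] == "cat"]
ann_ids = lvis.get_ann_ids(cat_ids=cat_ids)          # annotations of that category
img_ids = {a["image_id"] for a in lvis.load_anns(ann_ids)}
print(len(img_ids), "images contain this category")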
Hi everybody, I would like to convert the annotation file downloaded from lvisdataset.org to COCO format. I see these are not the same, even though both are JSON.
I would like to know how I can do this.
Could someone help me?
Thank you in advance,
Antonio.
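As a starting point, a minimal sketch covering two schema differences I am aware of (stated as assumptions about the v1 files, so verify against your download): LVIS image records carry a "coco_url" instead of a "file_name", and annotations lack the "iscrowd" flag that some COCO tooling expects.

import json

with open("lvis_v1_val.json") as f:
    data = json.load(f)

for img in data["images"]:
    # e.g. ".../val2017/000000397133.jpg" -> "000000397133.jpg"
    img["file_name"] = img["coco_url"].split("/")[-1]
for ann in data["annotations"]:
    ann.setdefault("iscrowd", 0)

with open("lvis_v1_val_cocofmt.json", "w") as f:
    json.dump(data, f)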
How can I get LVIS v0.5?
Hi @agrimgupta92,
Is there any instruction for using the colormap for visualization, especially for the segmentation task?
Thanks!
Hi, how can I evaluate the mAP of each class?
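In case it helps, a sketch of one way to pull per-category AP out of the evaluator, assuming the accumulated precision array mirrors COCOeval's [iou, recall, category, area] layout (the file names are placeholders):

import numpy as np
from lvis import LVIS, LVISResults, LVISEval

gt = LVIS("lvis_v1_val.json")
dt = LVISResults(gt, "results.json")
ev = LVISEval(gt, dt, "segm")
ev.run()

precision = ev.eval["precision"]          # assumed shape: (T, R, K, A)
for k, cat_id in enumerate(gt.get_cat_ids()):
    p = precision[:, :, k, 0]             # all-area slice for category k
    valid = p[p > -1]
    ap = valid.mean() if valid.size else float("nan")
    print(cat_id, ap)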
https://github.com/NVlabs/FreeSOLO/blob/main/LICENSE
FreeSOLO uses the function below to evaluate on COCO, but it seems the LVIS API doesn't have .loadRes:
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
annType = 'segm'
prefix = 'instances'
print('Running demo for {} results.'.format(annType))
dataDir='datasets/coco/'
dataType='val2017'
annFile = '{}/annotations/{}_{}.json'.format(dataDir,prefix,dataType)
cocoGt=COCO(annFile)
resFile = 'training_dir/FreeSOLO_pl/inference/coco_instances_results.json'
#resFile = 'demo/instances_val2017_densecl_r101.json'
cocoDt=cocoGt.loadRes(resFile)
cocoEval = COCOeval(cocoGt,cocoDt,annType)
cocoEval.params.useCats = 0
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
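For comparison, a sketch of the LVIS-API counterpart, where LVISResults plays the role of COCO.loadRes (the file names below are placeholders for your ground-truth and detection files):

from lvis import LVIS, LVISResults, LVISEval

ann_file = "lvis_v1_val.json"
res_file = "coco_instances_results.json"

lvis_gt = LVIS(ann_file)
lvis_dt = LVISResults(lvis_gt, res_file)   # loadRes equivalent
lvis_eval = LVISEval(lvis_gt, lvis_dt, "segm")
lvis_eval.run()                            # evaluate + accumulate + summarize
lvis_eval.print_results()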
Hey. We found a few issues with the annotations of the val set.
For this image:
Image ID: 19924
Which can be found on 'explore' by looking up 'necktie' and 'hat'.
And also for this image:
Image ID: 338304
Which can be found on 'explore' by looking up 'sheep' and 'hat'.
It appears that these annotations are rotated.
We are looking to use this dataset now and are concerned that other images, e.g. from the training set, could have the same problem. Is there any assurance that this is not a problem for the rest of the images?
Thanks a lot for making this dataset. It is super useful.
Regards,
Jonathon Luiten
Hi, I didn't see any GETTING_STARTED or guiding instructions on the internet. Is it possible to release some? I see there are still some differences between the LVIS API and cocoapi.
Could we replace == with >= in the requirements, so that package versions won't be forced to fall back?