Hello, this is by design.

Tl;dr: Indeed, using the `train` set list of `set_lists_fewview_train` is the best way to train your few-view model.

In more detail, all frames within a category are split into six sets named `<sequence_set>_<known|unseen>`, i.e.:

`train_known`
`train_unseen`
`dev_known`
`dev_unseen`
`test_known`
`test_unseen`
The `set_lists_fewview_*.json` set lists are defined as follows:

```
set_lists_fewview_train: {
    "train": train_known,
    "val": train_known + train_unseen,
    "test": train_known + train_unseen,
}

set_lists_fewview_dev: {
    "train": train_known,
    "val": dev_known + dev_unseen,
    "test": dev_known + dev_unseen,
}

set_lists_fewview_test: {
    "train": train_known,
    "val": dev_known + dev_unseen,
    "test": test_known + test_unseen,
}
```
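As a quick sketch, the overlap between the splits can be checked with toy frame IDs (the real set lists store full frame records rather than bare IDs; all names below are illustrative):

```python
# Toy frame IDs standing in for the per-category frame sets described above.
train_known = {"f1", "f2"}
train_unseen = {"f3"}

# Composition of set_lists_fewview_train, mirroring the definition above.
set_lists_fewview_train = {
    "train": set(train_known),
    "val": train_known | train_unseen,
    "test": train_known | train_unseen,
}

# Every train frame also appears in val (and test) by construction.
assert set_lists_fewview_train["train"] <= set_lists_fewview_train["val"]
print(sorted(set_lists_fewview_train["val"]))  # ['f1', 'f2', 'f3']
```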
For your case specifically, the `train` set list of `set_lists_fewview_train` contains only the `train_known` frames, which should be used for training. However, the `val` set list of `set_lists_fewview_train` contains `train_known` but ALSO `train_unseen` frames. This is why you see that all frames from `train` are also in `val`.

The `val` set also contains the `train` views because, when validating/testing, one needs access to the known source views (from the `train` set) in order to generate the unseen views. This requires both known and unseen views to live in the same set of loaded images.
Indeed, if you inspect the `eval_batches` files, you will discover that the first (target) frame in an eval batch is always drawn from the `unseen` set of frames, while the rest of the frames come from the `known` frames.
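A minimal sanity check of that convention, using hypothetical frame IDs and a made-up known/unseen lookup (the real eval batches store frame records, not bare IDs):

```python
# Hypothetical known/unseen label per frame ID.
frame_type = {
    "f1": "train_known",
    "f2": "train_known",
    "f3": "train_unseen",
}

# A hypothetical eval batch: the target frame comes first,
# followed by the known source frames.
eval_batches = [["f3", "f1", "f2"]]

for batch in eval_batches:
    target, sources = batch[0], batch[1:]
    # The target must be unseen; all sources must be known.
    assert frame_type[target].endswith("unseen")
    assert all(frame_type[s].endswith("known") for s in sources)
print("eval-batch convention holds")
```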
To find out which frames are known/unseen, feel free to inspect the `meta.frame_type` fields in `frame_annotations.jgz`.
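For example, a minimal reader for such a gzipped-JSON annotations file might look like the following; the tiny file written here is a stand-in for the real `frame_annotations.jgz`, and the record layout (a list of dicts with a `meta.frame_type` field) is assumed from the description above:

```python
import gzip
import json
import os
import tempfile
from collections import Counter

# Write a tiny stand-in for frame_annotations.jgz (gzipped JSON list of
# per-frame records); the real file is much larger but is read the same way.
records = [
    {"frame_number": 0, "meta": {"frame_type": "train_known"}},
    {"frame_number": 1, "meta": {"frame_type": "train_unseen"}},
    {"frame_number": 2, "meta": {"frame_type": "train_known"}},
]
path = os.path.join(tempfile.mkdtemp(), "frame_annotations.jgz")
with gzip.open(path, "wt") as f:
    json.dump(records, f)

# Group the frames by their known/unseen type.
with gzip.open(path, "rt") as f:
    annos = json.load(f)
counts = Counter(a["meta"]["frame_type"] for a in annos)
print(counts)  # Counter({'train_known': 2, 'train_unseen': 1})
```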
I hope this helps, let me know if further clarification is needed.
from co3d.
Thank you so much for the reply! This is super helpful!