
taskonomy's Introduction

This repository contains the task bank (pretrained models), the dataset, and the task affinity analyses and results for the following paper:

Taskonomy: Disentangling Task Transfer Learning (CVPR 2018)

Amir Zamir, Alexander Sax*, William Shen*, Leonidas Guibas, Jitendra Malik, Silvio Savarese.

TASK BANK
The taskbank folder contains information about our pretrained models, and scripts to download them. There are sample outputs, and links to live demos.

DATASET
The data folder contains information and statistics about the dataset, some sample data, and instructions for how to download the full dataset.

Task affinity analyses and results
This folder contains the raw and normalized data used for measuring task affinities.

Website
The webpage of the project with links to assets and demos.

Citation

If you find the code, models, or data useful, please cite this paper:

@inproceedings{zamir2018taskonomy,
  title={Taskonomy: Disentangling Task Transfer Learning},
  author={Zamir, Amir R and Sax, Alexander and Shen, William B and Guibas, Leonidas and Malik, Jitendra and Savarese, Silvio},
  booktitle={2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018},
  organization={IEEE}
}

taskonomy's People

Contributors

alexsax, amir32002, b0ku1


taskonomy's Issues

design of the transfer net

Hi, I've read your supplementary file, and I'm still somewhat confused about the design of your transfer net.

From my understanding, your transfer net is encoder + transfer function + decoder.

For a target T, the encoder is from source S, but would the decoder structure change? Also, do you use the same transfer function for all targets?

Also, if the outputs of two encoders are concatenated, how do you resize 16 x 16 x 16 back to 16 x 16 x 8?

Thank you!! Waiting for your reply :)
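For reference, a minimal sketch of the shapes described in the question: two 16 x 16 x 8 representations concatenated along the channel axis give 16 x 16 x 16, and a 1 x 1 convolution is one common way to project back to 8 channels. This is only an illustration (tf.keras assumed), not necessarily the paper's method.

import tensorflow as tf

# Two hypothetical encoder outputs of shape (batch, 16, 16, 8).
rep_a = tf.random.normal([1, 16, 16, 8])
rep_b = tf.random.normal([1, 16, 16, 8])

# Concatenating along the channel axis gives (1, 16, 16, 16).
concat = tf.concat([rep_a, rep_b], axis=-1)

# A 1x1 convolution projects the concatenation back to 8 channels.
project = tf.keras.layers.Conv2D(filters=8, kernel_size=1)
reduced = project(concat)
print(reduced.shape)  # (1, 16, 16, 8)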

SOS from a newbie

Sorry for bothering you. I'm a first-year postgraduate student in medical statistics. My master's supervisor wants me to find more than one machine learning method for prediction (for example, predicting death rate) from structured data (for example, red blood cell counts), and he also wants me to use transfer learning to improve efficiency and accuracy.

If you have time, please recommend some good models to me. It would be even better if you could point me to some open-source projects on GitHub.

Thanks a million for your help

Max retries exceeded with url: /jsonrpc (Caused by None)

Hello, when I try to download the dataset with omnitools.download all --components taskonomy --subset tiny --dest ./taskonomy_dataset/ --connections_total 40 --agree --name --email

it gives me the following error. What should I do?

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 439, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 755, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.9/site-packages/urllib3/util/retry.py", line 574, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=6800): Max retries exceeded with url: /jsonrpc (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x110b5e5b0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/multiprocess/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/usr/local/lib/python3.9/site-packages/omnidata_tools/dataset/download.py", line 298, in process_model
    raise e
  File "/usr/local/lib/python3.9/site-packages/omnidata_tools/dataset/download.py", line 283, in process_model
    tar_fpath = download_tar(
  File "/usr/local/lib/python3.9/site-packages/omnidata_tools/dataset/download.py", line 160, in download_tar
    res = aria2api.client.add_uri(uris=[url], options=options_dict)
  File "/usr/local/lib/python3.9/site-packages/aria2p/client.py", line 482, in add_uri
    return self.call(self.ADD_URI, params=[uris, options, position])  # type: ignore
  File "/usr/local/lib/python3.9/site-packages/aria2p/client.py", line 262, in call
    return self.res_or_raise(self.post(payload))
  File "/usr/local/lib/python3.9/site-packages/aria2p/client.py", line 358, in post
    return requests.post(self.server, data=payload, timeout=self.timeout).json()
  File "/usr/local/lib/python3.9/site-packages/requests/api.py", line 117, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=6800): Max retries exceeded with url: /jsonrpc (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x110b5e5b0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/bin/omnitools.download", line 8, in <module>
    sys.exit(download())
  File "/usr/local/lib/python3.9/site-packages/fastcore/script.py", line 113, in _f
    tfunc(**merge(args, args_from_prog(func, xtra)))
  File "/usr/local/lib/python3.9/site-packages/omnidata_tools/dataset/download.py", line 304, in download
    errors = list(tqdm.tqdm(p.imap(process_model, models), total=len(models)))
  File "/usr/local/lib/python3.9/site-packages/tqdm/std.py", line 1180, in __iter__
    for obj in iterable:
  File "/usr/local/lib/python3.9/site-packages/multiprocess/pool.py", line 870, in next
    raise value
requests.exceptions.ConnectionError: None: Max retries exceeded with url: /jsonrpc (Caused by None)

07/01 16:16:31 [NOTICE] Shutdown sequence commencing... Press Ctrl-C again for emergency shutdown.

Download Results:
gid   |stat|avg speed  |path/URI

With great thanks

Valid classes for scene categorization

Hi, the supplementary materials of the Taskonomy paper mention that 63 categories from the MIT places dataset were used for training the scene categorization model.
I expected the selected categories to be contained in the file './taskonomy/taskbank/lib/data/places_classes_to_keep.txt'.
However, this file only marks 31 classes as valid (with a '1').
Could you specify which classes you used for training the scene categorization model?
Thanks in advance.

Questions about the preprocessing for data

Hello,
I have some questions about a few arguments whose details I cannot find in the documentation.

For the data loading in this function, what does the fixated condition mean, and how can we get this mean/std? Is it pre-calculated from the dataset?

if not raw:
    if fixated:
        std = np.asarray([10.12015407, 8.1103528, 1.09171896, 1.21579016, 0.26040945, 10.05966329])
        mean = np.asarray([-2.67375523e-01, -1.19147040e-02, 1.14497274e-02, 1.10903410e-03, 2.10509948e-02, -4.02013549e+00])
    else:
        mean = np.asarray([-9.53197445e-03, -1.05196691e-03, -1.07545642e-02, 2.08785638e-02, -9.27858049e-02, -2.58052205e+00])
        std = np.asarray([1.02316223, 0.66477511, 1.03806996, 5.75692889, 1.37604962, 7.43157247])
    pose = (pose - mean) / std
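For what it's worth, a minimal sketch of how per-dimension statistics like these could be recomputed, assuming the constants are simply the mean and standard deviation of the 6-dimensional pose vectors over the training set (the file name below is illustrative, not part of the released code):

import numpy as np

# Hypothetical array of stacked training poses, shape (N, 6).
poses = np.load('train_poses.npy')

mean = poses.mean(axis=0)   # per-dimension mean, shape (6,)
std = poses.std(axis=0)     # per-dimension standard deviation, shape (6,)

normalized = (poses - mean) / std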

Also, when should the XY axes be flipped, and what would happen in the room layout task if they were not flipped, given your comment in this line? 🤔

def get_camera_matrix(view_dict, flip_xy=False):
    position = view_dict['camera_location']
    rotation_euler = view_dict['camera_rotation_final']
    R = transforms3d.euler.euler2mat(*rotation_euler, axes='sxyz')
    camera_matrix = transforms3d.affines.compose(position, R, np.ones(3))
    if flip_xy:
        # For some reason the x and y are flipped in room layout
        temp = np.copy(camera_matrix[0, :])
        camera_matrix[0, :] = camera_matrix[1, :]
        camera_matrix[1, :] = -temp
    return camera_matrix

Code to generate dataset

Hey, thanks for making all the running code available.
I have another dataset of RGB-D images and I would like to generate the ground truth for all the single-image tasks you used. Is the code or method for that available somewhere? I couldn't find it in the paper or its references.

Thanks!

Missing domain names in sample data folder

I noticed several of the target domain names from the configs in taskbank/experiments/final refer to folders that are missing (or differently named) in the sample data folder and in data/README.md.

For example, curvature doesn't have a data folder. The code takes the value of cfg['target_domain_name'] directly and tries to access a folder of that name, which doesn't exist, so training fails with an error.

There are some domain names which have slightly different folder names. For example, segmentsemantic doesn't exist but segment_semantic exists. keypoint2d doesn't exist, but keypoints2d does.

Is there some preprocessing that needs to be done to this folder before running the code?

places_class_to_keep.txt seems stale

Issue
It appears to store the information about which Places classes to keep for the classification task, but it only has 31 non-zero entries while there are 63 classification labels, and there is a hard-coded select array in the get_synset method which does contain 63 ones to select the intended classes.

Solution
Either remove the file to avoid confusion or update it.
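A quick sketch for inspecting the file, assuming it stores one 0/1 flag per line (one line per MIT Places class, in order):

# Count how many classes are flagged to keep (assumes one 0/1 flag per line).
with open('taskonomy/taskbank/lib/data/places_classes_to_keep.txt') as f:
    flags = [int(line.strip()) for line in f if line.strip()]

print(f'{sum(flags)} of {len(flags)} classes are flagged with 1')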

model weights are missing from S3 bucket?

When I run sh download_model.sh, I get the following response. It appears the models aren't present in the S3 bucket.

Downloading vanishing_point's model.index
--2019-09-12 08:43:35--  https://s3-us-west-2.amazonaws.com/taskonomy-unpacked-oregon/model_log_final/vanishing_point/logs/model.permanent-ckpt.index
Resolving s3-us-west-2.amazonaws.com (s3-us-west-2.amazonaws.com)... 52.218.144.68
Connecting to s3-us-west-2.amazonaws.com (s3-us-west-2.amazonaws.com)|52.218.144.68|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2019-09-12 08:43:36 ERROR 404: Not Found.

testing pretrained model - depth


I'm using the pretrained model in the "rgb2depth" folder and want to reproduce "loss = 0.35". Which data should I use?

I've tried the "depth_zbuffer" test data, but "l1_loss, loss_g, loss_d_real, loss_d_fake" come out around "0.6, 0.7, 0.9, 0.7".

I suppose I used the wrong data or the wrong loss... should I use the "depth_euclidean" data instead?

Thank you!

Access to Mesh

Hi, thank you for the great work and dataset!

I was wondering whether we have access to the ground-truth meshes?

Euclidean Distance ?

How can I do Euclidean distance estimation?
rgb2depth and rgb2mist both estimate depth values in the range 0 to 1.

about taskonomy

The Taskonomy tiny split I downloaded has damaged images. I tried to download it again, but the result is the same. My download path is https://datasets.epfl.ch/taskonomy/links.txt. The downloaded images newfields/rgb/point_1070_view_8_domain_rgb.png, muleshoe/rgb/point_399_view_5_domain_rgb.png, and woodbine_point_1096_view_6_domain_rgb.png are shown in the attached screenshots. I want to confirm whether they are damaged.
If they are damaged, can you share correct copies with me?
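As a quick local check, Pillow can be used to test whether the downloaded PNGs are actually corrupted (a sketch; adjust the paths to wherever the files were extracted):

from PIL import Image

paths = [
    'newfields/rgb/point_1070_view_8_domain_rgb.png',
    'muleshoe/rgb/point_399_view_5_domain_rgb.png',
]

for p in paths:
    try:
        with Image.open(p) as img:
            img.verify()          # checks file integrity without fully decoding
        print(p, 'looks OK')
    except Exception as e:        # truncated or damaged files raise here
        print(p, 'appears damaged:', e)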

Unable to download the dataset.

Hello, thank you for your wonderful research and generous open-source contribution. May I ask which tutorial I should refer to for downloading the dataset, and whether registration is required? I followed the link in your repository (https://github.com/StanfordVL/taskonomy/tree/master/data) and used the following command:

sudo apt-get install aria2
pip install omnidata-tools
omnitools.download all --components taskonomy --subset fullplus \
  --dest ./taskonomy_dataset/ \
  --connections_total 40 --agree

Error is following:

[LICENSE] Before continuing the download, please review the terms of use for each of the following component datasets:
[LICENSE] omnidata: https://raw.githubusercontent.com/EPFL-VILAB/omnidata-tools/main/LICENSE
[LICENSE] taskonomy: https://raw.githubusercontent.com/StanfordVL/taskonomy/master/data/LICENSE
Traceback (most recent call last):
  File "/home/chenjiaqi/anaconda3/envs/mmseg/bin/omnitools.download", line 8, in <module>
    sys.exit(download())
  File "/home/chenjiaqi/anaconda3/envs/mmseg/lib/python3.8/site-packages/fastcore/script.py", line 119, in _f
    return tfunc(**merge(args, args_from_prog(func, xtra)))
  File "/home/chenjiaqi/anaconda3/envs/mmseg/lib/python3.8/site-packages/omnidata_tools/dataset/download.py", line 260, in download
    licenses_clickthrough(components, require_prompt=not agree_all, component_to_license=component_to_license, email=email, name=name)
  File "/home/chenjiaqi/anaconda3/envs/mmseg/lib/python3.8/site-packages/omnidata_tools/dataset/download.py", line 76, in licenses_clickthrough
    if not (name and email_valid(email)): raise ValueError("In order to use --agree_all you must also supply a name and valid email through the args --name NAME and --email USER@DOMAIN)")
ValueError: In order to use --agree_all you must also supply a name and valid email through the args --name NAME and --email USER@DOMAIN)

Thank you!

transfer function setting

According to the supplementary material, your transfer function is composed of 3 layers: (1) clip + norm, (2) dilated conv, (3) dilated conv.

But in that case the transfer function would change the size of the representation (16 x 16 x 8). Would you change the decoder to concatenate with the output of the transfer function?

Thank you!!
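For concreteness, a rough sketch of a three-stage transfer function like the one described above (clip + normalization, then two dilated convolutions). The layer widths, dilation rate, and normalization choice here are assumptions, not the paper's exact settings; note that with padding='same' and 8 output filters the 16 x 16 x 8 shape is preserved, so the decoder input size need not change.

import tensorflow as tf

def transfer_function(rep):
    # rep: float tensor of shape (batch, 16, 16, 8) from the source encoder.
    x = tf.clip_by_value(rep, -1.0, 1.0)                 # (1) clip
    x = tf.keras.layers.LayerNormalization()(x)          # (1) normalize
    x = tf.keras.layers.Conv2D(8, 3, padding='same',
                               dilation_rate=2, activation='relu')(x)  # (2) dilated conv
    x = tf.keras.layers.Conv2D(8, 3, padding='same',
                               dilation_rate=2)(x)                     # (3) dilated conv
    return x                                             # still (batch, 16, 16, 8)

out = transfer_function(tf.random.normal([1, 16, 16, 8]))
print(out.shape)  # (1, 16, 16, 8)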

Download never ends?

My download was interrupted by a disconnection and I restarted it afterwards. I noticed that it keeps re-downloading the same folders, and the dataset is now around 16 TB, which is more than it should be, right? Something seems to be off.

I only need rgb, depth_zbuffer, and segment_semantic, so I shouldn't be downloading this whole chunk. I know there is a command for downloading specific folders, but that also re-downloads what has already been downloaded, which is pretty annoying.

Is there any workaround?

Access to dataset

Hi! I'm very excited about your work and dataset! Actually, I'm a bit confused about how to contact the authors. I was wondering how to get access to the dataset? Sorry for asking here; I didn't find the instructions described in this issue.

How could I get the entire dataset?

I notice that

More of code, models, and dataset of Taskonomy coming soon. 

in the README has been commented out. Does this mean the dataset will not be published?

Thanks a lot.

camera parameters

Thank you for sharing your wonderful work.
I now plan to conduct experiments using the Taskonomy dataset, but I cannot find the camera parameters. Could you provide them?

Transferring knowledge to my own dataset

Hi, I read your paper recently and it's really brilliant!

Now I have a dataset labelled with object classes and semantic segmentation masks; say this is the target task on my dataset. The classes are different from those of your pre-trained models. I want to do transfer learning using your task bank as the source task (taking the encoder parts) and train my model. Could you please tell me how to do that?
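Not an official answer, but the generic transfer-learning pattern the question describes looks roughly like this in plain tf.keras; the small pretrained_encoder below is only a stand-in for a loaded taskonomy encoder, and the shapes and class count are placeholders.

import tensorflow as tf

# Stand-in for a pretrained taskonomy encoder (in practice, load the real weights).
pretrained_encoder = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(128, 3, strides=2, padding='same', activation='relu'),
])
pretrained_encoder.trainable = False          # freeze the source-task features

num_classes = 10                              # your own label set
inputs = tf.keras.Input(shape=(256, 256, 3))
features = pretrained_encoder(inputs, training=False)

# New decoder head trained from scratch on your segmentation labels.
x = tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding='same', activation='relu')(features)
logits = tf.keras.layers.Conv2DTranspose(num_classes, 3, strides=2, padding='same')(x)

model = tf.keras.Model(inputs, logits)
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))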

Error when downloading TinyTaskonomy

Hi All,
I followed the README and ran the following command after successfully installing aria2 and omnidata-tools.
It seems that for some files the connection to the server breaks.
Here is the trace:

[DL:97MiB][#dcfcb1 0B/7.4GiB(0%)][#a82352 450MiB/821MiB(54%)][#d9c894 335MiB/2.4GiB(13%)][#c72378 265MiB/1.1GiB(21%)]
11/01 17:36:58 [NOTICE] Download complete: compressed//segment_unsup25d__taskonomy__almena.tar
 *** Download Progress Summary as of Tue Nov  1 17:38:32 2022 ***
=====================================================================================================================
[#dcfcb1 0B/7.4GiB(0%) CN:1 DL:0B]
FILE: compressed//rgb__taskonomy__alfred.tar
---------------------------------------------------------------------------------------------------------------------
[#a82352 697MiB/821MiB(84%) CN:16 DL:0B]
FILE: compressed//depth_euclidean__taskonomy__almena.tar
---------------------------------------------------------------------------------------------------------------------
[#d9c894 583MiB/2.4GiB(23%) CN:16 DL:0B]
FILE: compressed//edge_texture__taskonomy__almena.tar
---------------------------------------------------------------------------------------------------------------------
[#c72378 502MiB/1.1GiB(41%) CN:16 DL:0B]
FILE: compressed//keypoints2d__taskonomy__almena.tar
---------------------------------------------------------------------------------------------------------------------
[#abb076 856MiB/1.8GiB(46%) CN:16 DL:0B]
FILE: compressed//keypoints3d__taskonomy__almena.tar
---------------------------------------------------------------------------------------------------------------------
[#5042be 1.1GiB/1.8GiB(60%) CN:16 DL:0B]
FILE: compressed//normal__taskonomy__almena.tar
---------------------------------------------------------------------------------------------------------------------
[#4ec56e 456MiB/1.7GiB(25%) CN:16 DL:0B]
FILE: compressed//principal_curvature__taskonomy__almena.tar
---------------------------------------------------------------------------------------------------------------------
[#c90ad6 358MiB/2.8GiB(12%) CN:16 DL:0B]
FILE: compressed//rgb__taskonomy__almena.tar
---------------------------------------------------------------------------------------------------------------------
[#26a78e 770MiB/1.0GiB(71%) CN:16 DL:0B]
FILE: compressed//reshading__taskonomy__almena.tar
---------------------------------------------------------------------------------------------------------------------
[#34d64e 0B/9.4GiB(0%) CN:1 DL:0B]
FILE: compressed//rgb_large__taskonomy__almena.tar
---------------------------------------------------------------------------------------------------------------------
[#e8a3d4 0B/60MiB(0%) CN:1 DL:0B]
FILE: compressed//segment_unsup2d__taskonomy__almena.tar
---------------------------------------------------------------------------------------------------------------------
[#0efe21 0B/55MiB(0%) CN:1 DL:0B]
FILE: compressed//class_object__taskonomy__almota.tar
---------------------------------------------------------------------------------------------------------------------
[#bb4e30 0B/2.7GiB(0%) CN:1 DL:0B]
FILE: compressed//edge_texture__taskonomy__almota.tar
---------------------------------------------------------------------------------------------------------------------

[DL:0B][#dcfcb1 0B/7.4GiB(0%)][#a82352 697MiB/821MiB(84%)][#d9c894 583MiB/2.4GiB(23%)][#c72378 502MiB/1.1GiB(41%)][#a[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/almena_keypoints3d.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/almena_rgb_large.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/almena_edge_texture.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/almena_depth_euclidean.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/almota_edge_texture.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/almena_rgb.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/almena_normal.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/almena_keypoints2d.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/almota_class_object.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/almena_reshading.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/alfred_rgb.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/almena_principal_curvature.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/almena_segment_unsup2d.tar (stacktrace below)
multiprocess.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/opt/conda/lib/python3.8/http/client.py", line 1344, in getresponse
    response.begin()
  File "/opt/conda/lib/python3.8/http/client.py", line 307, in begin
    version, status, reason = self._read_status()
  File "/opt/conda/lib/python3.8/http/client.py", line 268, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/opt/conda/lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
    resp = conn.urlopen(
  File "/opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "/opt/conda/lib/python3.8/site-packages/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/opt/conda/lib/python3.8/site-packages/urllib3/packages/six.py", line 770, in reraise
    raise value
  File "/opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py", line 451, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "/opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py", line 340, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='localhost', port=6800): Read timed out. (read timeout=60.0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/opt/conda/lib/python3.8/site-packages/omnidata_tools/dataset/download.py", line 298, in process_model
    raise e
  File "/opt/conda/lib/python3.8/site-packages/omnidata_tools/dataset/download.py", line 283, in process_model
    tar_fpath = download_tar(
  File "/opt/conda/lib/python3.8/site-packages/omnidata_tools/dataset/download.py", line 161, in download_tar
    success = wait_on(aria2api, res)
  File "/opt/conda/lib/python3.8/site-packages/omnidata_tools/dataset/download.py", line 187, in wait_on
    while not (a2api.get_downloads([gid])[0].is_complete or a2api.get_downloads([gid])[0].has_failed):
  File "/opt/conda/lib/python3.8/site-packages/aria2p/api.py", line 298, in get_downloads
    downloads.append(Download(self, self.client.tell_status(gid)))
  File "/opt/conda/lib/python3.8/site-packages/aria2p/client.py", line 877, in tell_status
    return self.call(self.TELL_STATUS, [gid, keys])  # type: ignore
  File "/opt/conda/lib/python3.8/site-packages/aria2p/client.py", line 262, in call
    return self.res_or_raise(self.post(payload))
  File "/opt/conda/lib/python3.8/site-packages/aria2p/client.py", line 358, in post
    return requests.post(self.server, data=payload, timeout=self.timeout).json()
  File "/opt/conda/lib/python3.8/site-packages/requests/api.py", line 119, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/requests/sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/conda/lib/python3.8/site-packages/requests/sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/requests/adapters.py", line 529, in send
    raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPConnectionPool(host='localhost', port=6800): Read timed out. (read timeout=60.0)
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/bin/omnitools.download", line 8, in <module>
    sys.exit(download())
  File "/opt/conda/lib/python3.8/site-packages/fastcore/script.py", line 119, in _f
    return tfunc(**merge(args, args_from_prog(func, xtra)))
  File "/opt/conda/lib/python3.8/site-packages/omnidata_tools/dataset/download.py", line 304, in download
    errors = list(tqdm.tqdm(p.imap(process_model, models), total=len(models)))
  File "/opt/conda/lib/python3.8/site-packages/tqdm/_tqdm.py", line 1060, in __iter__
    for obj in iterable:
  File "/opt/conda/lib/python3.8/site-packages/multiprocess/pool.py", line 868, in next
    raise value
requests.exceptions.ReadTimeout: None: None
  1%|▉                                                                          | 137/10600 [05:12<6:37:42,  2.28s/it]
root@a804f8c0dde0:/images#
11/01 17:38:41 [NOTICE] Shutdown sequence commencing... Press Ctrl-C again for emergency shutdown.
[DL:50KiB][#dcfcb1 0B/7.4GiB(0%)][#a82352 697MiB/821MiB(84%)][#d9c894 583MiB/2.4GiB(23%)][#c72378 502MiB/1.1GiB(41%)]
11/01 17:38:44 [NOTICE] Download GID#dcfcb1a904b54c49 not complete: compressed//rgb__taskonomy__alfred.tar

11/01 17:38:44 [NOTICE] Download GID#a82352ebd95c4aec not complete: compressed//depth_euclidean__taskonomy__almena.tar

11/01 17:38:44 [NOTICE] Download GID#d9c89480fd02a3cb not complete: compressed//edge_texture__taskonomy__almena.tar

11/01 17:38:44 [NOTICE] Download GID#c723789b4a0b6657 not complete: compressed//keypoints2d__taskonomy__almena.tar

11/01 17:38:44 [NOTICE] Download GID#abb076fee190d5d8 not complete: compressed//keypoints3d__taskonomy__almena.tar

11/01 17:38:44 [NOTICE] Download GID#5042be4145f66b55 not complete: compressed//normal__taskonomy__almena.tar

11/01 17:38:44 [NOTICE] Download GID#4ec56efd51f8b934 not complete: compressed//principal_curvature__taskonomy__almena.tar

11/01 17:38:44 [NOTICE] Download GID#c90ad6e2c8686ae1 not complete: compressed//rgb__taskonomy__almena.tar

11/01 17:38:44 [NOTICE] Download GID#26a78e95921374b3 not complete: compressed//reshading__taskonomy__almena.tar

Download Results:
gid   |stat|avg speed  |path/URI
======+====+===========+=======================================================
dcfcb1|INPR|       0B/s|compressed//rgb__taskonomy__alfred.tar
a82352|INPR|   2.7MiB/s|compressed//depth_euclidean__taskonomy__almena.tar
d9c894|INPR|   2.4MiB/s|compressed//edge_texture__taskonomy__almena.tar
c72378|INPR|   2.4MiB/s|compressed//keypoints2d__taskonomy__almena.tar
abb076|INPR|   2.9MiB/s|compressed//keypoints3d__taskonomy__almena.tar
5042be|INPR|   3.6MiB/s|compressed//normal__taskonomy__almena.tar
4ec56e|INPR|   2.5MiB/s|compressed//principal_curvature__taskonomy__almena.tar
c90ad6|INPR|   2.4MiB/s|compressed//rgb__taskonomy__almena.tar
26a78e|INPR|   2.8MiB/s|compressed//reshading__taskonomy__almena.tar

Status Legend:
(INPR):download in-progress.

aria2 will resume download if the transfer is restarted.
If there are any errors, then see the log file. See '-l' option in help/man page for details.

For example, #dcfcb1 remains at 0 MB while the other files are being downloaded. I should mention that I tried to download the dataset on multiple machines and the same error came up each time.

Please help.
@alexsax
@amir32002

AHP?

I would like to ask whether the similarity matrix you obtained is complex?

Dataset download in windows system

I tried to download the dataset on a Windows system. Perhaps there is a problem with the aria2 installation; the following error occurred (see the attached screenshot).
Has anyone encountered this problem before?

Implementation of the BIP solver

Hi,

code/README.md mentions some notebooks used during development. Specifically, I'm interested in the implementation of the BIP solver, which is supposed to be included in code/notebooks/analysis/ as stated in the README, but I cannot find it there. Could you provide the code? It would be great to see the implementation.

Many thanks!

About the Camera Parameters

Recently, I have been planning to use your dataset to train my model, but I need to know the exact camera parameters, so I would like to ask how you set them in your experiments.
Looking forward to your reply.

depth_zbuffer

Very nice job and dataset !

But how can I get absolute depth values in meters from depth_zbuffer?

Thx!
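A minimal sketch of the conversion, assuming the depth_zbuffer images are 16-bit PNGs in which one unit corresponds to 1/512 m and the maximum value (65535) marks invalid pixels such as sky; please verify this encoding against data/README.md before relying on it. The file name below is illustrative.

import numpy as np
from PIL import Image

raw = np.asarray(Image.open('point_0_view_0_domain_depth_zbuffer.png'), dtype=np.float32)

invalid = raw >= 65535            # assumed sentinel for missing depth / sky
depth_m = raw / 512.0             # assumed encoding: 512 units per meter
depth_m[invalid] = np.nan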

Asking for the transfer net

I would like to learn about the design of your transfer net, but the "TransferNet" part does not seem to exist in your published code. Could you please publish it? Thank you very much!

Turning Taskonomy into a Multitask Network

I would like to use a shared encoder network trunk and different decoders to make a multitask network using taskonomy as a framework.

  1. Can this be feasibly done using the taskonomy project? (the codebase is large and I want to be sure before I attempt to work with it)
  2. I don't see the scripts to generate configs and I'm wondering where those are, as well as code/experiments/ and code/notebooks/, which seem like they might be helpful in seeing how to train the network.

Thanks!
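As a point of reference (plain tf.keras, not the taskonomy codebase), the shared-trunk / per-task-decoder pattern described in the question looks roughly like this:

import tensorflow as tf

# Shared encoder trunk.
inputs = tf.keras.Input(shape=(256, 256, 3))
x = tf.keras.layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(inputs)
features = tf.keras.layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)

def make_head(feats, out_channels, name):
    # Small task-specific decoder on top of the shared features.
    h = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(feats)
    return tf.keras.layers.Conv2DTranspose(out_channels, 3, strides=2, padding='same', name=name)(h)

model = tf.keras.Model(inputs, {
    'depth': make_head(features, 1, 'depth'),
    'normals': make_head(features, 3, 'normals'),
})
model.compile(optimizer='adam', loss={'depth': 'mae', 'normals': 'mae'})
model.summary()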

Error while downloading the dataset

I have entered the --name and --email parameters but I still get this issue. Below is the terminal log.

Downloads Omnidata starter dataset.
--- in
Downloads Omnidata starter dataset.
---...
else: warn(msg)
[HEADER] -------------------------------------
[HEADER] From SERVERS: (using checksum validation: False)
[HEADER] https://datasets.epfl.ch/omnidata//links.txt
[HEADER] https://datasets.epfl.ch/taskonomy//links.txt
[HEADER]
[HEADER] Data parameters: (what to download)
[HEADER] Domains = ['rgb', 'normals', 'point_info']
[HEADER] Components = ['replica', 'taskonomy']
[HEADER] Subset = debug
[HEADER] Split = all
[HEADER]
[HEADER] Data locations:
[HEADER] Dataset (extracted) = ./omnidata_starter_dataset/
[HEADER] Compressed files = compressed/
[HEADER] -------------------------------------

[LICENSE] Before continuing the download, please review the terms of use for each of the following component datasets:
[LICENSE] taskonomy: https://raw.githubusercontent.com/StanfordVL/taskonomy/master/data/LICENSE
[LICENSE] replica: https://raw.githubusercontent.com/facebookresearch/Replica-Dataset/main/LICENSE
[LICENSE] omnidata: https://raw.githubusercontent.com/EPFL-VILAB/omnidata-tools/main/LICENSE
dung [email protected]
[NOTICE] Confirmation supplied by option '--agree_all'

[NOTICE] Opening aria2c download daemon in background: Run 'aria2p' in another window to view status.

11/14 07:40:35 [WARN] Neither --rpc-secret nor a combination of --rpc-user and --rpc-passwd is set. This is insecure. It is extremely recommended to specify --rpc-secret with the adequate secrecy or now deprecated --rpc-user and --rpc-passwd.

11/14 07:40:35 [NOTICE] IPv4 RPC: listening on TCP port 6800
['rgb', 'normals', 'point_info']
[NOTICE] Filtered down to 4 models based on specified criteria.
[NOTICE] Found 4 matching blobs on remote serverss.
0%| | 0/4 [00:00<?, ?it/s][FAILURE] Uncaught error when processing model https://datasets.epfl.ch/omnidata/omnidata_tars/point_info/replica/point_info-replica-frl_apartment_0.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/omnidata/omnidata_tars/rgb/replica/rgb-replica-frl_apartment_0.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/allensville_rgb.tar (stacktrace below)
[FAILURE] Uncaught error when processing model https://datasets.epfl.ch/taskonomy/allensville_point_info.tar (stacktrace below)
Exception in thread Thread-3 (_handle_results):
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/root/miniconda3/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/root/miniconda3/lib/python3.11/site-packages/multiprocess/pool.py", line 579, in _handle_results
    task = get()
           ^^^^^
  File "/root/miniconda3/lib/python3.11/site-packages/multiprocess/connection.py", line 253, in recv
    return _ForkingPickler.loads(buf.getbuffer())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.11/site-packages/dill/_dill.py", line 301, in loads
    return load(file, ignore, **kwds)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.11/site-packages/dill/_dill.py", line 287, in load
    return Unpickler(file, ignore=ignore, **kwds).load()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.11/site-packages/dill/_dill.py", line 442, in load
    obj = StockUnpickler.load(self)
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.11/site-packages/requests/exceptions.py", line 41, in __init__
    CompatJSONDecodeError.__init__(self, *args)
TypeError: JSONDecodeError.__init__() missing 2 required positional arguments: 'doc' and 'pos'

Network is unreachable

Hi!

I am trying to download the dataset, but I encountered the following errors even when downloading the tiny subset. I am in China, so I don't know whether the problem is on my side, and I would like to know how to solve it.

Traceback (most recent call last):
  File "/home/xdl/miniconda3/lib/python3.8/urllib/request.py", line 1350, in do_open
    h.request(req.get_method(), req.selector, req.data, headers,
  File "/home/xdl/miniconda3/lib/python3.8/http/client.py", line 1240, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/xdl/miniconda3/lib/python3.8/http/client.py", line 1286, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/home/xdl/miniconda3/lib/python3.8/http/client.py", line 1235, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/home/xdl/miniconda3/lib/python3.8/http/client.py", line 1006, in _send_output
    self.send(msg)
  File "/home/xdl/miniconda3/lib/python3.8/http/client.py", line 946, in send
    self.connect()
  File "/home/xdl/miniconda3/lib/python3.8/http/client.py", line 1402, in connect
    super().connect()
  File "/home/xdl/miniconda3/lib/python3.8/http/client.py", line 917, in connect
    self.sock = self._create_connection(
  File "/home/xdl/miniconda3/lib/python3.8/socket.py", line 808, in create_connection
    raise err
  File "/home/xdl/miniconda3/lib/python3.8/socket.py", line 796, in create_connection
    sock.connect(sa)
OSError: [Errno 101] Network is unreachable

Unable to download dataset

I'm facing a problem downloading the dataset.
I used the following command:

sudo apt-get install aria2
pip install omnidata-tools
omnitools.download all --components taskonomy --subset fullplus --dest ./taskonomy_dataset/ --connections_total 40 --agree

Error is following:

/home/hafizur/anaconda3/lib/python3.9/site-packages/fastcore/docscrape.py:225: UserWarning: potentially wrong underline length... 
Downloads Omnidata starter dataset. 
--- in 
Downloads Omnidata starter dataset.
---...
  else: warn(msg)
[HEADER] -------------------------------------
[HEADER] From SERVERS: (using checksum validation: False)
[HEADER]     https://datasets.epfl.ch/omnidata//links.txt
[HEADER]     https://datasets.epfl.ch/taskonomy//links.txt
[HEADER] 
[HEADER] Data parameters: (what to download)
[HEADER]     Domains    = ['all']
[HEADER]     Components = ['taskonomy']
[HEADER]     Subset     = fullplus
[HEADER]     Split      = all
[HEADER] 
[HEADER] Data locations:
[HEADER]     Dataset (extracted)      = ./taskonomy_dataset/
[HEADER]     Compressed files         = compressed/
[HEADER] -------------------------------------


[LICENSE] Before continuing the download, please review the terms of use for each of the following component datasets:
[LICENSE]     taskonomy: https://raw.githubusercontent.com/StanfordVL/taskonomy/master/data/LICENSE
[LICENSE]     omnidata: https://raw.githubusercontent.com/EPFL-VILAB/omnidata-tools/main/LICENSE
Traceback (most recent call last):
  File "/home/hafizur/anaconda3/bin/omnitools.download", line 8, in <module>
    sys.exit(download())
  File "/home/hafizur/anaconda3/lib/python3.9/site-packages/fastcore/script.py", line 119, in _f
    return tfunc(**merge(args, args_from_prog(func, xtra)))
  File "/home/hafizur/anaconda3/lib/python3.9/site-packages/omnidata_tools/dataset/download.py", line 260, in download
    licenses_clickthrough(components, require_prompt=not agree_all, component_to_license=component_to_license, email=email, name=name)
  File "/home/hafizur/anaconda3/lib/python3.9/site-packages/omnidata_tools/dataset/download.py", line 76, in licenses_clickthrough
    if not (name and email_valid(email)): raise ValueError("In order to use --agree_all you must also supply a name and valid email through the args --name NAME and --email USER@DOMAIN)")
ValueError: In order to use --agree_all you must also supply a name and valid email through the args --name NAME and --email USER@DOMAIN) 

input image cropped and overwritten

scipy.misc.toimage(np.squeeze(img), cmin=0.0, cmax=1.0).save(args.im_name)

Is that intentional? If not, how about saving the cropped image file in a temp folder to use later?

    # save the cropped image in temp folder to prevent overwriting
    img_name = os.path.basename(args.im_name)
    name, ext = os.path.splitext(img_name)
    args.im_name = os.path.join('/tmp/', name + '_cropped' + ext)
    scipy.misc.toimage(np.squeeze(img), cmin=0.0, cmax=1.0).save(args.im_name)

how to generate win rates and affinities

Hi, thanks for the excellent work.

I have tried your models and would like to generate the win rates and affinities. taskonomy/code/README.md says "Generate win rates and affinities with one of the methods in analysis/", but there is no "analysis" folder. It also says that taskonomy/code has the following structure, but there is no "notebooks" folder.

[screenshot of the directory structure listed in taskonomy/code/README.md]

Could you please add that?

Thanks

Download fails due to "unexpected end of data"

Thank you in advance for your help :)

My attempt to download the dataset fails after 7%.

Executing this command:
sudo apt-get install aria2
pip install omnidata-tools
omnitools.download all --components taskonomy --subset fullplus \
  --dest ./taskonomy_dataset/ \
  --connections_total 40 --agree

The error line:
[FAILURE] Failure when processing model https://datasets.epfl.ch/taskonomy/ballou_class_scene.tar

The full stack trace of the error:

multiprocess.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/shared/venvs/py3.8-torch1.7.1/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/shared/venvs/py3.8-torch1.7.1/lib/python3.8/site-packages/omnidata_tools/dataset/download.py", line 257, in process_model
    raise e
  File "/shared/venvs/py3.8-torch1.7.1/lib/python3.8/site-packages/omnidata_tools/dataset/download.py", line 253, in process_model
    untar(tar_fpath, dest=dest, model=model, ignore_existing=ignore_existing, dryrun=dryrun, output_structure=output_structure)
  File "/shared/venvs/py3.8-torch1.7.1/lib/python3.8/site-packages/omnidata_tools/dataset/download.py", line 177, in untar
    tar.extractall(path=tmpdirname)
  File "/usr/lib/python3.8/tarfile.py", line 2026, in extractall
    self.extract(tarinfo, path, set_attrs=not tarinfo.isdir(),
  File "/usr/lib/python3.8/tarfile.py", line 2067, in extract
    self._extract_member(tarinfo, os.path.join(path, tarinfo.name),
  File "/usr/lib/python3.8/tarfile.py", line 2139, in _extract_member
    self.makefile(tarinfo, targetpath)
  File "/usr/lib/python3.8/tarfile.py", line 2188, in makefile
    copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
  File "/usr/lib/python3.8/tarfile.py", line 255, in copyfileobj
    raise exception("unexpected end of data")
tarfile.ReadError: unexpected end of data
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/shared/venvs/py3.8-torch1.7.1/bin/omnitools.download", line 8, in <module>
    sys.exit(download())
  File "/shared/venvs/py3.8-torch1.7.1/lib/python3.8/site-packages/fastcore/script.py", line 112, in _f
    tfunc(**merge(args, args_from_prog(func, xtra)))
  File "/shared/venvs/py3.8-torch1.7.1/lib/python3.8/site-packages/omnidata_tools/dataset/download.py", line 263, in download
    r = list(tqdm.tqdm(p.imap(process_model, models), total=len(models)))
  File "/shared/venvs/py3.8-torch1.7.1/lib/python3.8/site-packages/tqdm/std.py", line 1178, in __iter__
    for obj in iterable:
  File "/shared/venvs/py3.8-torch1.7.1/lib/python3.8/site-packages/multiprocess/pool.py", line 868, in next
    raise value
tarfile.ReadError: unexpected end of data

question about "generalization to Novel Tasks"

In your paper, the section "Generalization to Novel Tasks" takes one of your 26 tasks as a target-only task to show Taskonomy's performance on novel tasks.

But I'm wondering why you didn't choose a totally new task (one not among the 26)? And what is the difference between this section and the previously described setting?

I thought that when you generate the taskonomy map, you take one task as the target and the other 25 as sources.

Thanks!
