
alfred's Introduction

ALFRED

A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk,
Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox
CVPR 2020

ALFRED (Action Learning From Realistic Environments and Directives) is a new benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks. Long composition rollouts with non-reversible state changes are among the phenomena we include to shrink the gap between research benchmarks and real-world applications.

For the latest updates, see: askforalfred.com

Want more? Check out ALFWorld – interactive TextWorld environments for ALFRED scenes!

Quickstart

Clone repo:

$ git clone https://github.com/askforalfred/alfred.git alfred
$ export ALFRED_ROOT=$(pwd)/alfred

Install requirements:

$ virtualenv -p $(which python3) --system-site-packages alfred_env # or whichever package manager you prefer
$ source alfred_env/bin/activate

$ cd $ALFRED_ROOT
$ pip install --upgrade pip
$ pip install -r requirements.txt

Download Trajectory JSONs and ResNet features (~17GB):

$ cd $ALFRED_ROOT/data
$ sh download_data.sh json_feat

Train models:

$ cd $ALFRED_ROOT
$ python models/train/train_seq2seq.py --data data/json_feat_2.1.0 --model seq2seq_im_mask --dout exp/model:{model},name:pm_and_subgoals_01 --splits data/splits/oct21.json --gpu --batch 8 --pm_aux_loss_wt 0.1 --subgoal_aux_loss_wt 0.1

More Info

  • Dataset: Downloading full dataset, Folder structure, JSON structure.
  • Models: Training and Evaluation, File structure, Pre-trained models.
  • Data Generation: Generation, Replay Checks, Data Augmentation (high-res, depth, segmentation masks, etc.).
  • Errata: Updated numbers for Goto subgoal evaluation.
  • THOR 2.1.0 Docs: Deprecated documentation from Ai2-THOR 2.1.0 release.
  • FAQ: Frequently Asked Questions.

SOTA Models

Open-source models that outperform the Seq2Seq baselines from ALFRED:

Context-Aware Planning and Environment-Aware Memory for Instruction Following Embodied Agents
Byeonghwi Kim, Jinyeon Kim, Yuyeong Kim, Cheolhong Min, Jonghyun Choi
Paper, Code

Multi-Level Compositional Reasoning for Interactive Instruction Following
Suvaansh Bhambri*, Byeonghwi Kim*, Jonghyun Choi
Paper, Code

Agent with the Big Picture: Perceiving Surroundings for Interactive Instruction Following
Byeonghwi Kim, Suvaansh Bhambri, Kunal Pratap Singh, Roozbeh Mottaghi, Jonghyun Choi
Paper, Code

FILM: Following Instructions in Language with Modular Methods
So Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, Ruslan Salakhutdinov
Paper, Code

A Persistent Spatial Semantic Representation for High-level Natural Language Instruction Execution
Valts Blukis, Chris Paxton, Dieter Fox, Animesh Garg, Yoav Artzi
Paper, Code

Hierarchical Task Learning from Language Instructions with Unified Transformers and Self-Monitoring
Yichi Zhang, Joyce Chai
Paper, Code

Episodic Transformer for Vision-and-Language Navigation
Alexander Pashevich, Cordelia Schmid, Chen Sun
Paper, Code

MOCA: A Modular Object-Centric Approach for Interactive Instruction Following
Kunal Pratap Singh*, Suvaansh Bhambri*, Byeonghwi Kim*, Roozbeh Mottaghi, Jonghyun Choi
Paper, Code

Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion
Alessandro Suglia, Qiaozi Gao, Jesse Thomason, Govind Thattai, Gaurav Sukhatme
Paper, Code

Contact Mohit to add your model here.

Prerequisites

  • Python 3
  • PyTorch 1.1.0
  • Torchvision 0.3.0
  • AI2THOR 2.1.0

See requirements.txt for all prerequisites.

Hardware

Tested on:

  • GPU - GTX 1080 Ti (12GB)
  • CPU - Intel Xeon (Quad Core)
  • RAM - 16GB
  • OS - Ubuntu 16.04

Leaderboard

Run your model on test seen and unseen sets, and create an action-sequence dump of your agent:

$ cd $ALFRED_ROOT
$ python models/eval/leaderboard.py --model_path <model_path>/model.pth --model models.model.seq2seq_im_mask --data data/json_feat_2.1.0 --gpu --num_threads 5

This will create a JSON file, e.g. task_results_20191218_081448_662435.json, inside the <model_path> folder. Submit this JSON here: AI2 ALFRED Leaderboard. For rules and restrictions, see the getting started page.

Rules:

  1. You are only allowed to use RGB and language instructions (goal & step-by-step) as input for your agents. You cannot use additional depth, mask, metadata info etc. from the simulator on Test Seen and Test Unseen scenes. However, during training you are allowed to use additional info for auxiliary losses etc.
  2. During evaluation, agents are restricted to max_steps=1000 and max_fails=10. Do not change these settings in the leaderboard script; these modifications will not be reflected in the evaluation server.
  3. ❗Do not spam the leaderboard with repeated submissions (under different email accounts) in order to optimize on the test set. Fine-tuning should be done only on the validation set, NOT on the leaderboard test set.
  4. Pick a legible model name for the submission. Just "baseline" is not very descriptive.
  5. All submissions must be attempts to solve the ALFRED dataset.
  6. Answer the following questions in the description: a. Did you use additional sensory information from THOR as input, e.g. depth, segmentation masks, class masks, panoramic images, etc., during test-time? If so, please report it. b. Did you use the alignments between step-by-step instructions and expert action-sequences for training or testing? (No by default; the instructions are serialized into a single sentence.)
  7. Share who you are: provide a team name and affiliation.
  8. (Optional) Share how you solved it: if possible, share information about how the task was solved. Link an academic paper or code repository if public.
  9. Only submit your own work: you may evaluate any model on the validation set, but must only submit your own work for evaluation against the test set.

Docker Setup

Install Docker and NVIDIA Docker.

Modify docker_build.py and docker_run.py to your needs.

Build

Build the image:

$ python scripts/docker_build.py 

Run (Local)

For local machines:

$ python scripts/docker_run.py
 
  source ~/alfred_env/bin/activate
  cd $ALFRED_ROOT

Run (Headless)

For headless VMs and Cloud-Instances:

$ python scripts/docker_run.py --headless 

  # inside docker
  tmux new -s startx  # start a new tmux session

  # start nvidia-xconfig
  sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024

  # start X server on DISPLAY 0
  # single X server should be sufficient for multiple instances of THOR
  sudo python ~/alfred/scripts/startx.py 0  # if this throws errors e.g "(EE) Server terminated with error (1)" or "(EE) already running ..." try a display > 0

  # detach from tmux shell
  # Ctrl+b then d

  # source env
  source ~/alfred_env/bin/activate
  
  # set DISPLAY variable to match X server
  export DISPLAY=:0

  # check THOR
  cd $ALFRED_ROOT
  python scripts/check_thor.py

  ###############
  ## (300, 300, 3)
  ## Everything works!!!

You might have to modify X_DISPLAY in gen/constants.py depending on which display you use.

Cloud Instance

ALFRED can be set up on headless machines like AWS or Google Cloud instances. The main requirement is that you have access to a GPU machine that supports OpenGL rendering. Run startx.py in a tmux shell:

# start tmux session
$ tmux new -s startx 

# start X server on DISPLAY 0
# single X server should be sufficient for multiple instances of THOR
$ sudo python $ALFRED_ROOT/scripts/startx.py 0  # if this throws errors e.g "(EE) Server terminated with error (1)" or "(EE) already running ..." try a display > 0

# detach from tmux shell
# Ctrl+b then d

# set DISPLAY variable to match X server
$ export DISPLAY=:0

# check THOR
$ cd $ALFRED_ROOT
$ python scripts/check_thor.py

###############
## (300, 300, 3)
## Everything works!!!

You might have to modify X_DISPLAY in gen/constants.py depending on which display you use.
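For reference, the change meant here is a one-liner along these lines (a sketch; check the exact variable name and value in your copy of gen/constants.py):

# gen/constants.py
X_DISPLAY = '0'  # set this to the display number you passed to startx.py, e.g. '1'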

Also, check out this guide: Setting up THOR on Google Cloud

Citation

If you find the dataset or code useful, please cite:

@inproceedings{ALFRED20,
  title ={{ALFRED: A Benchmark for Interpreting Grounded
           Instructions for Everyday Tasks}},
  author={Mohit Shridhar and Jesse Thomason and Daniel Gordon and Yonatan Bisk and
          Winson Han and Roozbeh Mottaghi and Luke Zettlemoyer and Dieter Fox},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2020},
  url  = {https://arxiv.org/abs/1912.01734}
}

License

MIT License

Change Log

28/10/2020:

  • Added --use_templated_goals option to train with templated goals instead of human-annotated goal descriptions.

26/10/2020:

  • Fixed missing stop-frame in Modeling Quickstart dataset (json_feat_2.1.0.zip).

14/10/2020:

  • Added errata for Goto subgoal evaluation.

07/04/2020:

  • Updated download links. Switched from Google Cloud to AWS. Old download links will be deactivated.

28/03/2020:

  • Updated the mask-interaction API to use IoU scores instead of max pixel count for selecting objects.
  • Results table in the paper will be updated with new numbers.

Contact

Questions or issues? Contact [email protected]

alfred's People

Contributors

anisha2102, askforalfred, jaw-ahm, mohitshridhar, theshadow29, unnat, ybisk


alfred's Issues

Panoramic Images

Hey,
Can you tell me how I can get Panoramic Images? (A nudge in the right direction)

Why does `feat_conv.pt` have 10 more frames than the number of images?

Hi. Thanks for the amazing repository.

I find that feat_conv.pt has 10 more frames than there are images. For example,

for task=pick_cool_then_place_in_recep-LettuceSliced-None-DiningTable-17/trial_T20190909_070538_437648, there are 455 images in traj_data['images'], but feat_conv is of shape 465x512x7x7

Similarly for task=pick_two_obj_and_place-Newspaper-None-GarbageCan-218/trial_T20190907_225356_202464, there are 530 images in traj_data['images'] but feat_conv is of shape 540x512x7x7

Is there any particular reason why this is the case?
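For reference, this is the minimal check I ran for a single trajectory (a sketch; the file paths are placeholders for the trajectory folders mentioned above):

import json
import torch

traj = json.load(open('traj_data.json'))    # traj_data.json of one of the trajectories above
feats = torch.load('feat_conv.pt')          # precomputed ResNet features for the same trajectory
print(len(traj['images']), feats.shape[0])  # e.g. 455 vs. 465, i.e. 10 extra frames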

Data Generation

Hi,
Could you please explain how the training data is generated?

Particularly, how is the 'action' generated?

I would appreciate it if you could provide some details.

Normal training time?

Hi,

I am trying to walk through the project and retrain the model by myself.

My machine is a GeForce RTX 2080 Ti with 11GB memory. The batch size is set to 4 (I tried 8, but it ran into OOM after half an epoch; I guess there are some very long data sequences).

It takes more than 3 hours to finish an epoch. Is this normal?

module 'constants' has no attribute 'DETECTION_SCREEN_WIDTH'

I installed the constants==0.6 package, but I get the error: module 'constants' has no attribute 'DETECTION_SCREEN_WIDTH'. Could you help me solve this problem?

Saving to: exp/model/pm_and_subgoals_01
batch: 0%| | 0/2628 [00:00<?, ?it/s]
epoch: 0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "models/train/train_seq2seq.py", line 108, in
model.run_train(splits, optimizer=optimizer)
File "/home/weijiang/alfred/models/model/seq2seq.py", line 91, in run_train
for batch, feat in self.iterate(train, args.batch):
File "/home/weijiang/alfred/models/model/seq2seq.py", line 284, in iterate
feat = self.featurize(batch)
File "/home/weijiang/alfred/models/model/seq2seq_im_mask.py", line 127, in featurize
feat['action_low_mask'].append([self.decompress_mask(a['mask']) for a in ex['num']['action_low'] if a['mask'] is not None])
File "/home/weijiang/alfred/models/model/seq2seq_im_mask.py", line 127, in
feat['action_low_mask'].append([self.decompress_mask(a['mask']) for a in ex['num']['action_low'] if a['mask'] is not None])
File "/home/weijiang/alfred/models/model/seq2seq_im_mask.py", line 176, in decompress_mask
mask = np.array(decompress_mask(compressed_mask))
File "/home/weijiang/alfred/gen/utils/image_util.py", line 28, in decompress_mask
mask = np.zeros((constants.DETECTION_SCREEN_WIDTH, constants.DETECTION_SCREEN_HEIGHT))
AttributeError: module 'constants' has no attribute 'DETECTION_SCREEN_WIDTH'
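For what it's worth, this is the quick check I used to see which constants module actually gets imported (a sketch; it assumes ALFRED_ROOT is set as in the quickstart and mirrors the sys.path handling I believe the training script does):

import os
import sys

sys.path.append(os.path.join(os.environ['ALFRED_ROOT'], 'gen'))  # where the repo's own constants.py lives
import constants

print(constants.__file__)  # a site-packages path here means a pip-installed "constants" package shadows the repo's module
print(getattr(constants, 'DETECTION_SCREEN_WIDTH', 'missing'))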

Get different feature vectors when loading from feat_conv.pt and from resnet.featurize()

Hi,

I am doing this example:
full_2.1.0/train/pick_and_place_with_movable_recep-ButterKnife-Cup-SinkBasin-2/trial_T20190908_233322_447979/raw_images

For image 000000000.jpg

feat = resnet.featurize([Image.open(fname)], batch=1)
print(feat[0][:5,:5])

I get

tensor([[[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0188, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.6659, 0.2334, 0.4881, 0.0220],
         [0.0000, 0.0278, 0.0000, 0.1306, 0.0000, 0.0177, 0.0000],
         [0.2511, 0.9387, 0.5719, 0.2475, 0.1024, 0.3862, 0.1884],
         [1.4310, 1.5767, 0.8272, 0.0000, 0.0000, 0.0000, 0.3523]],

        [[0.0000, 0.0549, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.1023, 0.2866, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.4572, 0.0000, 0.0000, 0.0000, 0.4733, 0.8688, 0.6622],
         [0.9684, 0.0573, 0.0000, 0.0000, 0.0489, 0.0000, 0.0655]],

        [[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]],

        [[1.8689, 2.2256, 0.6508, 1.0258, 0.5759, 0.9021, 0.6726],
         [2.5713, 3.0914, 1.0797, 1.3719, 0.9788, 1.8322, 1.6944],
         [3.7268, 3.6742, 1.5358, 1.2200, 0.9661, 2.6235, 2.2480],
         [4.2898, 4.0467, 1.9082, 0.6326, 0.2264, 1.4761, 1.8504],
         [4.6841, 4.2882, 1.2279, 0.0133, 0.0000, 0.8532, 1.6886]],

        [[0.0000, 0.0000, 0.0000, 0.1095, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.3814, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.7656, 0.0000, 0.4508, 0.0000],
         [0.0000, 0.0000, 0.0000, 1.1697, 0.3166, 0.8373, 0.0000],
         [0.0000, 0.0000, 0.0052, 1.3592, 0.6858, 1.1441, 0.0000]]])

When using

x = torch.load("feat_conv.pt")
print(x[0][:5,:5])

I get

tensor([[[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0987, 0.0000, 0.4179, 0.0000, 0.3932, 0.0267],
         [0.1942, 0.3280, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.4623, 1.1134, 0.7551, 0.0720, 0.0000, 0.2512, 0.1772],
         [1.6237, 1.7352, 1.0370, 0.0000, 0.0000, 0.0000, 0.2772]],

        [[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.1637, 0.0102, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.2434, 0.0000, 0.0000, 0.0000, 0.1376, 0.5788, 0.3524],
         [0.7411, 0.0000, 0.0000, 0.0000, 0.2674, 0.1301, 0.0218]],

        [[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]],

        [[1.7606, 1.9361, 0.6170, 1.0957, 0.7715, 1.0960, 0.9179],
         [2.4290, 2.7654, 0.9339, 1.2526, 1.0871, 1.8147, 1.7731],
         [3.6967, 3.4739, 1.3849, 1.0991, 1.0435, 2.3166, 1.9403],
         [4.4716, 4.3128, 2.0116, 0.7454, 0.2938, 1.2445, 1.4761],
         [5.1107, 4.7808, 1.3802, 0.2358, 0.0000, 1.0597, 1.6283]],

        [[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.1443, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.6003, 0.0000, 0.3137, 0.0000],
         [0.0000, 0.0000, 0.0000, 1.2218, 0.5109, 0.9440, 0.0120],
         [0.0000, 0.0000, 0.0878, 1.5063, 0.8553, 1.0575, 0.0000]]])
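To quantify the gap, this is the comparison I ran on the two tensors above (a sketch that continues the snippets above, where feat comes from resnet.featurize and x from torch.load):

import torch

diff = (feat[0] - x[0]).abs()
print(torch.allclose(feat[0], x[0], atol=1e-4))  # False for this example
print(diff.max().item(), diff.mean().item())     # how large the mismatch actually is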

Run alfred on headless servers without root account

Hello there,

I'm trying to deploy the code on the headless servers that I don't have root access. The job is submitted to the servers via a job scheduler so that I even can't ssh to such servers.

I followed your guide in #29, but I got an error when running startx.py. It seems like that the execution needs the root privilege.
May you give me some hint how can I work around this problem?

Thank you a lot!

Below is the full output:

python startx.py
Starting X on DISPLAY=:0

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:61:0:0"
EndSection


Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    SubSection     "Display"
        Depth       24
        Virtual 1024 768
    EndSubSection
EndSection


Section "Device"
    Identifier     "Device1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:62:0:0"
EndSection


Section "Screen"
    Identifier     "Screen1"
    Device         "Device1"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    SubSection     "Display"
        Depth       24
        Virtual 1024 768
    EndSubSection
EndSection


Section "Device"
    Identifier     "Device2"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:177:0:0"
EndSection


Section "Screen"
    Identifier     "Screen2"
    Device         "Device2"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    SubSection     "Display"
        Depth       24
        Virtual 1024 768
    EndSubSection
EndSection


Section "Device"
    Identifier     "Device3"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:178:0:0"
EndSection


Section "Screen"
    Identifier     "Screen3"
    Device         "Device3"
    DefaultDepth    24
    Option         "AllowEmptyInitialConfiguration" "True"
    SubSection     "Display"
        Depth       24
        Virtual 1024 768
    EndSubSection
EndSection


Section "ServerLayout"
    Identifier     "Layout0"
    Screen 0 "Screen0" 0 0
    Screen 1 "Screen1" 0 0
    Screen 2 "Screen2" 0 0
    Screen 3 "Screen3" 0 0
EndSection

(EE) 
Fatal server error:
(EE) PAM authentication failed, cannot start X server.
	Perhaps you do not have console ownership?
(EE) 
(EE) 
Please consult the The X.Org Foundation support 
	 at http://wiki.x.org
 for help. 
(EE) 

Why repeat the same frame to predict <<stop>>?

Hi,

I have a question about this line.

feat['frames'].append(torch.cat([im, im[-1].unsqueeze(0)], dim=0)) # add stop frame

If I understand correctly, frames contains an image which conditions the prediction of action_low of the same index.
I assume that the im[-1] in this line is the image when executing the last actual action (e.g. Slice an object), but to predict the stop action, it would be natural to use a new image after the last actual action.
Why does it repeat the same frame, or is my understanding correct? Thanks!
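For concreteness, this is my reading of that line as a toy example (not the repo's code; shapes are made up):

import torch

im = torch.randn(10, 512, 7, 7)                       # features for 10 executed low-level actions (hypothetical)
frames = torch.cat([im, im[-1].unsqueeze(0)], dim=0)  # an 11th frame, a copy of the last one, conditions <<stop>>
assert frames.shape[0] == im.shape[0] + 1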

Unable to reproduce result for validation set

Hi,
Thanks for the amazing dataset and for sharing your code.
I am unable to reproduce the results for the seen validation set.
I downloaded the checkpoints you provided and I am using best_seen.pth.
I am getting SR 0.0097 and GC 0.0659, whereas the result on val seen in the paper is SR 0.037 and GC 0.1.

Could you point to anything I might have missed?

For starting XServer I used
sudo nvidia-xconfig -a --use-display-device=None --virtual=1024x786
sudo /usr/bin/X :0 &

I face two warnings
UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details. warnings.warn("Default upsampling behavior when mode={} is changed "

UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead. warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")

The second warning won't affect the results but I wanted to confirm if upsampling with align corners was intended or whether the warning appeared earlier too and I should ignore it ?

Torch/torchvision incompatibility in docker when running pretrained model

When running evaluation with the pretrained models (or training) inside the ALFRED docker ($ python3 models/eval/eval_seq2seq.py --model_path exp/model:seq2seq_im_mask,name:base30_pm010_sg010_01/best_seen.pth --eval_split valid_seen --data data/json_feat_2.1.0 --model models.model.seq2seq_im_mask --gpu --num_threads 1) there seems to be a torch/torchvision compatibility problem with drivers (which may be due to a wonky driver setup on my end).  If I'm running the pretrained model inside ai2thor-docker instead, I get the same error but I can update torch and torchvision to 1.6.0 and 0.7.0 respectively and the error goes away, leading to this issue where the Unity process crashes immediately due to a driver mismatch.

Currently having a bit of difficulty building the ALFRED docker with a newer python version (>= 3.6) that would allow me to upgrade torch and torchvision, but that's more of a "me" problem.

{'tests_seen': 1533,
 'tests_unseen': 1529,
 'train': 21023,
 'valid_seen': 820,
 'valid_unseen': 821}
Loading:  exp/model:seq2seq_im_mask,name:base30_pm010_sg010_01/best_seen.pth
Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /home/jzhanson/.cache/torch/checkpoints/resnet18-5c106cde.pth
100%|###############################################| 46827520/46827520 [00:00<00:00, 65572873.57it/s]
Traceback (most recent call last):
  File "models/eval/eval_seq2seq.py", line 54, in <module>
    eval = EvalTask(args, manager)
  File "/home/jzhanson/alfred/models/eval/eval.py", line 53, in __init__
    self.model = self.model.to(torch.device('cuda'))
  File "/home/jzhanson/alfred_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 386, in to
    return self._apply(convert)
  File "/home/jzhanson/alfred_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 193, in _apply
    module._apply(fn)
  File "/home/jzhanson/alfred_env/lib/python3.5/site-packages/torch/nn/modules/rnn.py", line 127, in _apply
    self.flatten_parameters()
  File "/home/jzhanson/alfred_env/lib/python3.5/site-packages/torch/nn/modules/rnn.py", line 123, in flatten_parameters
    self.batch_first, bool(self.bidirectional))
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

UPDATE: Unity process crashes with driver mismatch inside ai2thor-docker with startx.py, Ubuntu 18.04

I've been following along with #48 since I'm also trying to run ALFRED evaluation with THOR on a headless machine where I don't have root access. So far, I've modified the ai2thor-docker repo so that it installs ai2thor==2.1.0 (I had to also add RUN pip3 install --upgrade torch torchvision to the Dockerfile because there were some compatibility issues with the pytorch being 1.1.0 instead of 1.6.0 and torchvision being 0.3.0 instead of 0.7.0, since I was getting errors like

{'tests_seen': 1533,
 'tests_unseen': 1529,
 'train': 21023,
 'valid_seen': 820,
 'valid_unseen': 821}
Loading:  exp/model:seq2seq_im_mask,name:pm_and_subgoals_01/best_seen.pth
Traceback (most recent call last):
  File "/usr/lib/python3.6/tarfile.py", line 188, in nti
    s = nts(s, "ascii", "strict")
  File "/usr/lib/python3.6/tarfile.py", line 172, in nts
    return s.decode(encoding, errors)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xba in position 1: ordinal not in range(128)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/tarfile.py", line 2299, in next
    tarinfo = self.tarinfo.fromtarfile(self)
  File "/usr/lib/python3.6/tarfile.py", line 1093, in fromtarfile
    obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)
  File "/usr/lib/python3.6/tarfile.py", line 1035, in frombuf
    chksum = nti(buf[148:156])
  File "/usr/lib/python3.6/tarfile.py", line 191, in nti
    raise InvalidHeaderError("invalid header")
tarfile.InvalidHeaderError: invalid header

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 556, in _load
    return legacy_load(f)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 467, in legacy_load
    with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar, \
  File "/usr/lib/python3.6/tarfile.py", line 1591, in open
    return func(name, filemode, fileobj, **kwargs)
  File "/usr/lib/python3.6/tarfile.py", line 1621, in taropen
    return cls(name, mode, fileobj, **kwargs)
  File "/usr/lib/python3.6/tarfile.py", line 1484, in __init__
    self.firstmember = self.next()
  File "/usr/lib/python3.6/tarfile.py", line 2311, in next
    raise ReadError(str(e))
tarfile.ReadError: invalid header

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "models/eval/eval_seq2seq.py", line 54, in <module>
    eval = EvalTask(args, manager)
  File "/app/alfred/models/eval/eval.py", line 31, in __init__
    self.model, optimizer = M.Module.load(self.args.model_path)
  File "/app/alfred/models/model/seq2seq.py", line 318, in load
    save = torch.load(fsave)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 387, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 560, in _load
    raise RuntimeError("{} is a zip archive (did you mean to use torch.jit.load()?)".format(f.name))
RuntimeError: exp/model:seq2seq_im_mask,name:pm_and_subgoals_01/best_seen.pth is a zip archive (did you mean to use torch.jit.load()?)

).

I started by doing a pretty naive approach where I just moved my ALFRED repo with the quickstart data and the model checkpoints I wanted to evaluate into the Docker build context and copying all of it into the Docker image (which takes a while, but that's a "me" problem). Unfortunately, I get a bus error when attempting to run evaluation on my saved checkpoint, even if I generate the checkpoint by training inside the Docker container:

{'tests_seen': 1533,
 'tests_unseen': 1529,
 'train': 21023,
 'valid_seen': 820,
 'valid_unseen': 821}
Loading:  exp/model:seq2seq_im_mask,name:pm_and_subgoals_01/best_seen.pth
./test.sh: line 3:   117 Bus error               (core dumped) python3 models/eval/eval_seq2seq.py --model_path exp/model:seq2seq_im_mask,name:pm_and_subgoals_01/best_seen.pth --eval_split valid_seen --data data/json_feat_2.1.0 --model models.model.seq2seq_im_mask --gpu --num_threads 1

Update: Tried cloning the alfred repo and downloading the data from inside the docker and training from scratch, but same issue.

The reason I used torch==1.6.0 and torchvision==0.7.0 instead of torch==1.1.0 and torchvision==0.3.0 is that it silences the error

Traceback (most recent call last):
  File "models/train/train_seq2seq.py", line 103, in <module>
    model = model.to(torch.device('cuda'))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 386, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 193, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py", line 127, in _apply
    self.flatten_parameters()
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py", line 123, in flatten_parameters
    self.batch_first, bool(self.bidirectional))
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

I suspect the bus error has to do with the version differences, but I'm not quite sure yet.

Accessing frames in batches and selecting section of Natural language instruction for the input to the model.forward() function.

I had one question and a clarification regarding the code.

Question 1: We get frames (images) only when we interact with the environment, because the action we choose decides the frame we receive, but you seem to be loading the frames from the dataset. How can I know the frames beforehand without executing my model's output action in the environment?

Clarification: You mask your padded NL instructions using the DotAttn() function (wherein you multiply the language instruction with the previous hidden states), and then submit it as an input to your model.forward(). Is this a correct interpretation of how you decide which part of the language instruction is to be used for the next iteration?

Thanks

Note on Human Annotations

Hello, I've been going through the human annotations provided in the ALFRED corpus for my own purposes. And with the 589 I have collected, primarily from the "examine in light" data section (as I'm going sequentially through the training data) I have noticed a couple of things.

  1. The human annotators are often confused as to what the object of interest is in the scene. Often they refer to an object like a clock as "brown object" or "paper weight" etc.
  2. Far more descriptive instruction is provided for the navigation aspects of the dataset than the visual component of it
  3. There are strange characters left in some human annotations, such as parentheses and question marks, when annotators are confused by what they are viewing
  4. There are several spelling errors such as "off" vs "of" and "close" vs "closet" vs "closest"
  5. Many high level annotations provided by AMTs show that the annotators themselves do not understand the task. In the examine task, there are quite a few summarizations of the task as "turn on the light" or awkward phrasing such as "carry the clock to the light" without realizing the task is to view the clock itself. Basically, a very unnatural way for humans to communicate goals to an agent.

This is a very preliminary study, and I understand noise is typical in datasets. I will update this thread (if there is interest) with more evaluations of this dataset noise and human-focused specificity. But at the minimum, I wanted the authors to be aware of some of these aspects so that greater prefiltering may be applied before training on the dataset. I will also admit that the "examine in light" task was still one of the best performing tasks in the dataset; however, if this noise and annotator confusion carries into more complex, longer-horizon tasks, it could possibly contribute to such low performance of current models.

RAM Issue

Hi, just to clarify: is the quick-start training method loading the entire dataset into RAM? If this is the case, how would you suggest I run it if I have insufficient RAM?

Run with docker_enabled=True

Hi,

It seems the env.last_event.IMAGE_RELATED_ATTRIBUTES (e.g. frame) are None with docker_enabled=True. I am using the Dockerfile from ai2thor repo. I also tried the following code, the results are the same:

import ai2thor.controller
controller = ai2thor.controller.Controller()
controller.docker_enabled = True
controller.start()
# can be any one of the scenes FloorPlan###
controller.reset('FloorPlan28')
controller.step(dict(action='Initialize', gridSize=0.25))
controller.step(dict(action='RotateRight'))
controller.last_event.frame == None # True

I notice there is a TODO docker section in the README file and I was wondering how I could evaluate the model with docker?

Thanks.

How much storage space do Resnet Features need?

Hello, I've run into some trouble and need your help.
I have limited storage space on my PC, so I downloaded the JSON data you offer. I have generated some images and now want to extract ResNet features using extract_resnet.py. After extracting features for just val_seen and val_unseen, the feature.pt files already take 14.8 GB, so I estimate the training-data features may need around 200 GB.
But the download you offer is about 17 GB including both JSONs and features, and I don't know how much it takes after unzipping. Maybe it takes less than 200 GB?
Thanks for your help in advance!

The process is being "Killed"

Hi there,

After preprocessing, the process is being killed after a couple of warnings as follows:

warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")

Killed: 0%|▏ | 4/2628 [00:11<2:09:00, 2.95s/it]

Also, all the evaluations are being killed for both best_seen and best_unseen checkpoints:

{'tests_seen': 1533,
'tests_unseen': 1529,
'train': 21023,
'valid_seen': 820,
'valid_unseen': 821}
Loading: best_checkpoints/best_unseen.pth
Killed

{'tests_seen': 1533,
'tests_unseen': 1529,
'train': 21023,
'valid_seen': 820,
'valid_unseen': 821}
Loading: best_checkpoints/best_seen.pth
Killed

I was wondering if somebody who has faced this issue could help. I've already installed all the dependencies as mentioned, as well as adding CUDA 10.0 for the project.

Questions about subgoal evaluation of validation set.

Hi, first of all, thanks for your awesome work.
I have encountered some problems when I evaluated the following pre-trained model.
[image]

Environment:

[image]

I evaluated the model with the following command.

python3 models/eval/eval_seq2seq.py --model_path ./data/model\:seq2seq_im_mask\,name\:base30_pm010_sg010_01/best_seen.pth --eval_split valid_seen --data ./data/json_feat_2.1.0/ --model models.model.seq2seq_im_mask --gpu --num_threads 1 --subgoals all

Expected output:
[image]

My output:

Avg = 40.58
{'CleanObject': {'evals': 112,                                                                                                                                           
                 'sr': 0.17857142857142858,
                 'sr_plw': 0.17857142857142858,
                 'successes': 20},
 'CoolObject': {'evals': 132,
                'sr': 0.8333333333333334,
                'sr_plw': 0.8160280299410734,
                'successes': 110},
 'GotoLocation': {'evals': 2660,
                  'sr': 0.45338345864661656,
                  'sr_plw': 0.3228794854619464,
                  'successes': 1206},
 'HeatObject': {'evals': 107,
                'sr': 0.822429906542056,
                'sr_plw': 0.7851184079299491,
                'successes': 88},
 'PickupObject': {'evals': 1211,
                  'sr': 0.19240297274979357,
                  'sr_plw': 0.15914573049097747,
                  'successes': 233},
 'PutObject': {'evals': 1103,
               'sr': 0.642792384406165,
               'sr_plw': 0.5785606947178042,
               'successes': 709},
 'SliceObject': {'evals': 151,
                 'sr': 0.33112582781456956,
                 'sr_plw': 0.24598930481283418,
                 'successes': 50},
 'ToggleObject': {'evals': 94,
                  'sr': 0.18085106382978725,
                  'sr_plw': 0.16046099290780141,
                  'successes': 17}}

I am looking forward to your reply.

cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

Hi,

I'm seeing the same error as another person posted --

(alfred_env) (base) peter@neutronium:~/github/alfred$ python models/train/train_seq2seq.py --data data/json_feat_2.1.0 --model seq2seq_im_mask --dout exp/model:{model},name:pm_and_subgoals_01 --splits data/splits/oct21.json --gpu --batch 8 --pm_aux_loss_wt 0.1 --subgoal_aux_loss_wt 0.1
Namespace(action_loss_wt=1.0, actor_dropout=0.0, attn_dropout=0.0, batch=8, data='data/json_feat_2.1.0', dataset_fraction=0, dec_teacher_forcing=False, decay_epoch=10, demb=100, dframe=2500, dhid=512, dout='exp/model:seq2seq_im_mask,name:pm_and_subgoals_01', epoch=20, fast_epoch=False, gpu=True, hstate_dropout=0.3, input_dropout=0.0, lang_dropout=0.0, lr=0.0001, mask_loss_wt=1.0, model='seq2seq_im_mask', pframe=300, pm_aux_loss_wt=0.1, pp_folder='pp', preprocess=False, resume=None, save_every_epoch=False, seed=123, splits='data/splits/oct21.json', subgoal_aux_loss_wt=0.1, temp_no_history=False, vis_dropout=0.3, zero_goal=False, zero_instr=False)
{'tests_seen': 1533, 'tests_unseen': 1529, 'train': 21023, 'valid_seen': 820, 'valid_unseen': 821}
Traceback (most recent call last):
  File "models/train/train_seq2seq.py", line 103, in <module>
    model = model.to(torch.device('cuda'))
  File "/home/peter/github/alfred_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 386, in to
    return self._apply(convert)
  File "/home/peter/github/alfred_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 193, in _apply
    module._apply(fn)
  File "/home/peter/github/alfred_env/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 127, in _apply
    self.flatten_parameters()
  File "/home/peter/github/alfred_env/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 123, in flatten_parameters
    self.batch_first, bool(self.bidirectional))
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

I have verified that I've followed the installation instructions, and that the correct versions of torch (1.1.0), Torchvision (0.3.0 in requirements.txt; the prose says 1.3.0 but the latest version is 0.6.0), AI2THOR (2.1.0), and tensorboardX (1.8) have been installed.

I'm using a Titan RTX and CUDA 10.1 on KUbuntu 18.04.

The model seems to start training without the --gpu option, but it appears slow (so I didn't wait to see how long it would take).

thanks!

First Time Cloud User Questions

Hello!

I decided to post here instead of emailing just in case there are other first time cloud users out there who also need some guidance.

I saw this link in the repo description: How to Run AI2 Simulation with GCP, and I was curious if the specifications they provided:

2 Tesla K80 GPU with a 12GB graphics memory
16 vCores CPU
40–50 GB storage
Image of Intel® optimized Deep Learning Image: Base m24 (with Intel® MKL and CUDA 10.0)

were the same used by you all. I was also wondering, as a student, how much running on GCP might have cost you all.

Alternatively, if you all didn't use a cloud service, what hardware did you use? I have a 1080 8GB NVIDIA graphics card, and I ran into issues with not enough GPU RAM. I'm sort of stumped on whether I should get better hardware or pay for hourly usage of GCP or AWS.

Discrepancy between the number of instructions and high-level actions

Hi, thank you for developing this nice dataset!
When I was trying to align the instruction sentences and high-level actions, I found a discrepancy between their numbers in the following files.

train pick_two_obj_and_place-LettuceSliced-None-Fridge-1/trial_T20190906_181830_873214
train pick_two_obj_and_place-PotatoSliced-None-GarbageCan-28/trial_T20190908_120151_167011
train pick_two_obj_and_place-TomatoSliced-None-Microwave-27/trial_T20190907_013546_073160
train pick_two_obj_and_place-PotatoSliced-None-GarbageCan-28/trial_T20190908_115507_503798
train pick_two_obj_and_place-AppleSliced-None-Microwave-21/trial_T20190909_045706_358954
valid_unseen pick_two_obj_and_place-AppleSliced-None-CounterTop-10/trial_T20190907_061009_396474

It seems that all of them are caused when slicing two objects at the end of the episode.
The second slice action is separated from the rest like this.

# Example
# index 10
Cut both apples inside the microwave into three parts.
{'action': 'OpenObject', 'objectId': 'Microwave|-02.01|+00.69|-03.69'}
{'action': 'SliceObject', 'objectId': 'Apple|-01.60|+00.75|-03.28'}
# index 11 (no instruction)
[{'action': 'SliceObject', 'objectId': 'Apple|-01.30|+00.11|-03.68'}, {'action': 'CloseObject', 'objectId': 'Microwave|-02.01|+00.69|-03.69'}]

This is the code that I used to detect the discrepancy just for your information.

import json
from pathlib import Path
data_path = "/Users/ryokan/Desktop/vqn_alfred/storage/json_feat_2.1.0"
splits_path = "data/splits/oct21.json"


splits = json.load(open(splits_path))
for k in ['train', 'valid_seen', 'valid_unseen']:
    for task in splits[k]:
        json_path = Path(data_path) / k / task['task'] / 'traj_data.json'
        example = json.load(open(json_path))
        instructions = [ann['high_descs'] for ann in example["turk_annotations"]['anns']]

        assert len(instructions[0]) == len(instructions[1]) == len(instructions[2])

        instruction_length = len(instructions[0])

        last_high_index = example["plan"]['low_actions'][-1]['high_idx']
        num_high_actions = last_high_index + 1

        # the number of high-level actions and the number of instructions should be the same
        if num_high_actions != instruction_length:
            print(k, task['task'])

So, I think this is something unexpected in the data creation pipeline?

question of weighted_mask_loss function

Hi,

The weighted_mask_loss function inside seq2seq_im_mask looks like:

    def weighted_mask_loss(self, pred_masks, gt_masks):
        '''
        mask loss that accounts for weight-imbalance between 0 and 1 pixels
        '''
        # pred_mask [batch * len, 1, 300, 300], gt_mask[batch * len, 1, 300, 300]
        bce = self.bce_with_logits(pred_masks, gt_masks)
        flipped_mask = self.flip_tensor(gt_masks)
        inside = (bce * gt_masks).sum() / (gt_masks).sum()
        outside = (bce * flipped_mask).sum() / (flipped_mask).sum()
        return inside + outside

However, the summation is batch-wise instead of sample-wise, which might be problematic. Besides, I was wondering whether it is necessary to have an average. Here is the revised code based on my understanding:

    def weighted_mask_loss(self, pred_masks, gt_masks):
        '''
        mask loss that accounts for weight-imbalance between 0 and 1 pixels
        '''
        # pred_mask [batch * len, 1, 300, 300], gt_mask[batch * len, 1, 300, 300]
        bce = self.bce_with_logits(pred_masks, gt_masks)
        flipped_mask = self.flip_tensor(gt_masks)
        bs = bce.shape[0]
        bce = bce.view(bs, -1)
        gt_masks = gt_masks.view(bs, -1)
        flipped_mask = flipped_mask.view(bs, -1)
        # instance-wise summation
        inside = torch.sum(bce * gt_masks, dim=1) / torch.sum(gt_masks,dim=1)
        outside = torch.sum(bce * flipped_mask, dim=1) / torch.sum(flipped_mask, dim=1)
        per_loss = inside + outside
        # average
        loss = torch.mean(per_loss)
        return loss, per_loss

Is my understanding correct? Thanks!

Does a low-level instruction correspond to a PDDL action?

Hi,
First of all, huge thanks for consistently replying.

I've found that the instructions in some JSON files (for both training (ann.json) and validation (traj_data.json)) align to sub-goals as below. Is it true that the alignment holds for all trajectories in ALFRED?

Sub-goals in ("high_pddl" in "plan")
subgoals = {"GotoLocation", "PickupObject", "GotoLocation", "ToggleObject", "NoOp"}

Instructions in ("high_descs" in "turk_annotations")
instructions = {"go to ~~~", "pick up ~~~", "take something to somewhere", "turn on ~~~"}

In other words, I wonder if it's possible to figure out which sub-goal each instruction belongs to using the same index for all trajectories in ALFRED.

if subgoals[i] == "GotoLocation":
    instructions[i] is a "Goto" task. (should be)
else
    instructions[i] is an interaction task. (should be)

Again, huge thanks!
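To make the question concrete, this is the check I have in mind for a single trajectory (a sketch; the path is a placeholder and the key names are my reading of the JSON structure):

import json

traj = json.load(open('traj_data.json'))
subgoals = [p['discrete_action']['action'] for p in traj['plan']['high_pddl']]
instructions = traj['turk_annotations']['anns'][0]['high_descs']
# 'NoOp' is the terminal sub-goal and has no instruction, hence the -1
print(len(subgoals) - 1 == len(instructions))
print(list(zip(subgoals, instructions)))  # index-for-index alignment, if it holds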

Can't run startx.py inside docker — requires lspci (fixed) and Xorg (UPDATE: fixed)

When I try to run startx.py inside the docker environment, it tells me that there's no lspci command:

Traceback (most recent call last):
  File "alfred/scripts/startx.py", line 97, in <module>
    startx(display)
  File "alfred/scripts/startx.py", line 72, in startx
    for r in pci_records():
  File "alfred/scripts/startx.py", line 15, in pci_records
    output = subprocess.check_output(command).decode()
  File "/usr/lib/python3.5/subprocess.py", line 626, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.5/subprocess.py", line 693, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/usr/lib/python3.5/subprocess.py", line 947, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.5/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'lspci'

This is because the Dockerfile doesn't install pciutils under the apt-get install call. After adding that, startx.py complains about not having Xorg:

Traceback (most recent call last):
  File "startx.py", line 97, in <module>
    startx(display)
  File "startx.py", line 86, in startx
    subprocess.call(command)
  File "/usr/lib/python3.5/subprocess.py", line 557, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/usr/lib/python3.5/subprocess.py", line 947, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.5/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'Xorg'

But when I add xserver-xorg-core and xorg to the apt-get install call in the Dockerfile, scripts/docker_build.py prompts me for the keyboard layout and then freezes/hangs indefinitely:

Configuring keyboard-configuration
----------------------------------

The layout of keyboards varies per country, with some countries having multiple
common layouts. Please select the country of origin for the keyboard of this
computer.

  1. Afghani                                     48. Irish
  2. Albanian                                    49. Italian
  3. Amharic                                     50. Japanese
  4. Arabic                                      51. Japanese (PC-98xx Series)
  5. Arabic (Morocco)                            52. Kazakh
  6. Arabic (Syria)                              53. Khmer (Cambodia)
  7. Armenian                                    54. Korean
  8. Azerbaijani                                 55. Kyrgyz
  9. Bambara                                     56. Lao
  10. Bangla                                     57. Latvian
  11. Belarusian                                 58. Lithuanian
  12. Belgian                                    59. Macedonian
  13. Bosnian                                    60. Maltese
  14. Braille                                    61. Maori
  15. Bulgarian                                  62. Moldavian
  16. Burmese                                    63. Mongolian
  17. Chinese                                    64. Montenegrin
  18. Croatian                                   65. Nepali
  19. Czech                                      66. Norwegian
  20. Danish                                     67. Persian
  21. Dhivehi                                    68. Polish
  22. Dutch                                      69. Portuguese
  23. Dzongkha                                   70. Portuguese (Brazil)
  24. English (Cameroon)                         71. Romanian
  25. English (Ghana)                            72. Russian
  26. English (Nigeria)                          73. Serbian
  27. English (South Africa)                     74. Sinhala (phonetic)
  28. English (UK)                               75. Slovak
  29. English (US)                               76. Slovenian
  30. Esperanto                                  77. Spanish
  31. Estonian                                   78. Spanish (Latin American)
  32. Faroese                                    79. Swahili (Kenya)
  33. Filipino                                   80. Swahili (Tanzania)
  34. Finnish                                    81. Swedish
  35. French                                     82. Switzerland
  36. French (Canada)                            83. Taiwanese
  37. French (Democratic Republic of the Congo)  84. Tajik
  38. French (Guinea)                            85. Thai
  39. Georgian                                   86. Tswana
  40. German                                     87. Turkish
  41. German (Austria)                           88. Turkmen
  42. Greek                                      89. Ukrainian
  43. Hebrew                                     90. Urdu (Pakistan)
  44. Hungarian                                  91. Uzbek
  45. Icelandic                                  92. Vietnamese
  46. Indian                                     93. Wolof
  47. Iraqi
Country of origin for the keyboard:

I've tried the inputs of "26", "English (US)", and "26. English (US)" but it freezes for all three of them.

ai2thor-docker doesn't have these problems and doesn't prompt for keyboard setup, despite also installing xserver-xorg-core and xorg in its Dockerfile.

Changing the first line of the ALFRED Dockerfile from FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04 to FROM nvidia/cuda:11.0-devel-ubuntu18.04 as the ai2thor-docker does (and which I believe to be the correct versions for the machine I'm running on) doesn't seem to solve it.

postconditions_met for sliced objects

In ALL tasks involving a sliced object, the postcondition check while checking task success is:

if 'Sliced' in targets['object']:
    ts += 1
    if 'Sliced' in [p['objectId'] for p in pickupables]:
        s += 1

For example in L336 of tasks.py. Shouldn't these instances be

if np.any(['Sliced' in p['objectId'] for p in pickupables]):
    s += 1

There are 145 of these in valid_seen, which almost matches what this issue reports.

AI2-THOR 2.4.0

Hi there,

Thanks for the great work! Is there any plan to make this repo compatible with the newest AI2-THOR release? Thanks!

Human-in-the-loop with planner?

Is there a straightforward way to couple the existing PDDL code with the ability to interactively control the agent in an environment?

For example in a collaborative setting: I want to load an initial state, let the human explore/perform part of the task, then use the planner to finish/satisfy the high-level goal in case the human fails.

Parsing out w.r.t task type

Hello,

The ALFRED paper mentions 7 task types. Is there a way to identify all the traj_data.json files corresponding to just one of these task types? For example, if I am only interested in the JSON files corresponding to pick-and-place across all floorplans, is there a task-type tag associated with each traj_data.json file? Thank you for your help!
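For example, this is roughly the kind of filtering I'm after (a sketch; it assumes the task type is encoded as the prefix of the task folder name, as in the quickstart splits file):

import json

splits = json.load(open('data/splits/oct21.json'))
# task folders look like "<task_type>-<Object>-<MovableRecep>-<Receptacle>-<SceneNum>/<trial_id>"
pick_and_place = [t for t in splits['train'] if t['task'].startswith('pick_and_place_simple-')]
print(len(pick_and_place))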

Regards,
Adi

From which weight can I reproduce experimental results in the paper among three weights?

First of all, thanks for sharing the great code.

  1. Which weight should I use for the validation and test datasets?
    I'm trying to reproduce the numbers in the paper.
    The model was trained following alfred/models/README.md.
    After training, I've ended up with three weights:
  • best_seen.pth
  • best_unseen.pth
  • latest.pth
  2. Which max_step value should I use?
    The paper says that max_step is set to 400, but the README says it is set to 1000.
    I'm not sure which one was used for the paper.

Evaluation error

Hi, I am trying to run an evaluation on the checkpoint you provided. However, I get the following error:

Resetting ThorEnv
Process Process-2:
Traceback (most recent call last):
  File "/home/michas/anaconda3/envs/pytorch/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/home/michas/anaconda3/envs/pytorch/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/home/michas/Desktop/codes/alfred/models/eval/eval_task.py", line 21, in run
    env = ThorEnv()
  File "/home/michas/Desktop/codes/alfred/env/thor_env.py", line 29, in __init__
    super().__init__(quality=quality)
  File "/home/michas/anaconda3/envs/pytorch/lib/python3.7/site-packages/ai2thor/controller.py", line 452, in __init__
    event = self.reset(scene)
  File "/home/michas/Desktop/codes/alfred/env/thor_env.py", line 61, in reset
    super().reset(scene_name)
  File "/home/michas/anaconda3/envs/pytorch/lib/python3.7/site-packages/ai2thor/controller.py", line 484, in reset
    if scene not in self.scenes_in_build:
  File "/home/michas/anaconda3/envs/pytorch/lib/python3.7/site-packages/ai2thor/controller.py", line 474, in scenes_in_build
    event = self.step(action='GetScenesInBuild')
  File "/home/michas/Desktop/codes/alfred/env/thor_env.py", line 139, in step
    if "LookUp" in action['action']:
TypeError: string indices must be integers

Position and Rotation for objects - requesting clarity as a newbie

Why are there multiple position and rotation values for each object?

For example knife_8aa9254c in the dataset https://github.com/askforalfred/alfred/blob/master/data/json_2.1.0/train/pick_and_place_simple-Spoon-None-SinkBasin-12/trial_T20190907_035203_361260/traj_data.json

has three position (x, y, z) and rotation values. Which of these three is the initial position of the object? And do the positions for the agent and objects use the same origin (0, 0, 0)? What is (0, 0, 0)?

Thank you for your help!
-Adi

code logic error

in resnet.py

starting from line 62

        if self.model_type == "maskrcnn":
            self.resnet_model = MaskRCNN(args, eval, share_memory, use_conv_feat)
        else:
            self.resnet_model = Resnet18(args, eval, share_memory)

This does not make sense:
if you use "maskrcnn", there should be no parameter "use_conv_feat".

Also, the flow of the code does not allow the use of maskrcnn at the evaluation stage.
This code is quite confusing.

Unable to find X server when running evaluation

Hello, I'm trying to evaluate the pre-trained model weights (which I have downloaded) using the steps on the readme within the models directory. The cloned repo resides on an AWS EC2 instance with Ubuntu 18.04 and Nvidia GPU. I have tried running the evaluation code via XQuartz terminal with X11 forwarding, and via the local terminal in an RDP (enabled through xRDP). After activating my conda environment which contains the dependencies and cd-ing to $ALFRED_ROOT (i.e., top-level directory of repo), my command is as follows:

python models/eval/eval_seq2seq.py --model_path models/pretrained/model:seq2seq_im_mask,name:base30_pm010_sg010_01/best_seen.pth --eval_split valid_seen --data data/json_feat_2.1.0 --model models.model.seq2seq_im_mask --gpu --num_threads 4

In both setups, when running the command, I'm getting the following error (one for each thread I'm trying to start, and also in single-thread case):

No protocol specified
xdpyinfo: unable to open display ":0".
Process Process-5:
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/alfredPy/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/ubuntu/anaconda3/envs/alfredPy/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ubuntu/alfred/models/eval/eval_task.py", line 20, in run
    env = ThorEnv()
  File "/home/ubuntu/alfred/env/thor_env.py", line 34, in __init__
    player_screen_width=player_screen_width)
  File "/home/ubuntu/anaconda3/envs/alfredPy/lib/python3.6/site-packages/ai2thor/controller.py", line 855, in start
    self.check_x_display(env['DISPLAY'])
  File "/home/ubuntu/anaconda3/envs/alfredPy/lib/python3.6/site-packages/ai2thor/controller.py", line 715, in check_x_display
    ("Invalid DISPLAY %s - cannot find X server with xdpyinfo" % x_display)
AssertionError: Invalid DISPLAY :0 - cannot find X server with xdpyinfo

I'm a novice in this area, but it seems (in my limited understanding) that the Thor component of this code can't access the X server to do some GUI rendering, e.g., of the simulated images during evaluation. However, in both setups, I can verify that the $DISPLAY variable is populated with something reasonable, e.g., "localhost:10.0" on XQuartz, and ":13.0" on RDP. I can also successfully run "xclock" and "xeyes" in both setups.

FYI, I am able to successfully begin training a new model from scratch, so it seems the dependencies, etc. are set up correctly.

I've been struggling with this problem for about a day now and haven't found much useful information online for it, so any information or insight is appreciated! Thank you!
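
Not a fix, just a debugging aid: judging from the traceback, ai2thor validates the display by shelling out to xdpyinfo, so a quick way to see which display strings are actually reachable from your session (assuming xdpyinfo is installed) is something like:

    import subprocess

    def display_ok(display):
        # Returns True if xdpyinfo can open the given X display.
        try:
            subprocess.run(["xdpyinfo", "-display", display], check=True,
                           stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            return True
        except (subprocess.CalledProcessError, FileNotFoundError):
            return False

    # The display strings below are examples taken from this report.
    for d in [":0", ":13.0", "localhost:10.0"]:
        print(d, "->", "reachable" if display_ok(d) else "not reachable")

If ":0" is the only one that fails, the issue is likely that the evaluation code asks for display 0 regardless of what $DISPLAY is set to in your shell.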

Default parameters for training and log files

Supervision. We train all models using the teacher-forcing paradigm on the expert trajectories, and this ensures the language directives match the visual inputs.

The paper has this bit about teacher forcing, but the training command in the README does not use the --dec_teacher_forcing flag. Is this flag needed to train the model?

Also, would it be possible to share the log files (stdout and tensorboard logs) for the training run that produced the pretrained checkpoint?
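
For reference, the difference between teacher forcing and feeding back the model's own predictions looks roughly like the toy decoder below. This is a generic sketch with a hypothetical GRU decoder, not ALFRED's training code, and whether the released checkpoint used the flag is exactly the open question here.

    import torch
    import torch.nn as nn

    vocab, hidden = 12, 32                  # toy action vocabulary / state size
    embed = nn.Embedding(vocab, hidden)
    cell = nn.GRUCell(hidden, hidden)
    head = nn.Linear(hidden, vocab)

    def decode(gold_actions, teacher_forcing=True):
        B, T = gold_actions.shape
        h = torch.zeros(B, hidden)
        prev = torch.zeros(B, dtype=torch.long)   # <<start>> token id 0
        logits = []
        for t in range(T):
            h = cell(embed(prev), h)
            step_logits = head(h)
            logits.append(step_logits)
            # Teacher forcing conditions the next step on the expert (gold) action;
            # otherwise the model's own argmax prediction is fed back in.
            prev = gold_actions[:, t] if teacher_forcing else step_logits.argmax(-1)
        return torch.stack(logits, 1)

    gold = torch.randint(0, vocab, (4, 10))       # fake expert action sequences
    loss = nn.functional.cross_entropy(decode(gold).flatten(0, 1), gold.flatten())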

Question on PLW and PC

I'd like to get the nomenclature straight.

Is the PLW reported when running eval_seq2seq.py (an acronym for path_len_weight in the code?) the same as the "Path Weighted Metrics" in the paper?

I'm also not quite sure what PC means (the code says "postcondition", but I don't see this word in the paper).

Would anyone mind clarifying these terms?
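
As a reference point, my reading of the paper's path-weighted metrics (treat the exact formula as an assumption) is that a score s, whether success or goal-condition rate, is down-weighted by how long the agent's path was relative to the expert demonstration:

    def path_weighted(score, expert_len, agent_len):
        # Full credit only if the agent takes no more steps than the expert;
        # longer rollouts shrink the score proportionally.
        return score * expert_len / max(expert_len, agent_len)

    # A successful episode (score 1.0) that took 80 steps against a 40-step expert:
    print(path_weighted(1.0, 40, 80))   # 0.5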

Ground truth predictions don't yield 100% success rate

Hi,

I recently found that feeding ground-truth actions and masks does not yield a 100% success rate on valid_seen.

Over 817 valid_seen samples (the first 3 removed for personal reasons), the results are:
SR: 674/817 = 0.825
PC: 1946/2097 = 0.928

Any thoughts about the possible cause?

Visual Angle

Hi.
At the beginning of an episode, is the agent's visual angle horizontal?
And does the environment provide an API to get information about the visual angle and coordinates?
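
For what it's worth, the underlying AI2-THOR events expose the agent's pose and camera pitch in their metadata. The key names below are from my recollection of the 2.1.0 API, so verify them against your own event objects; controller stands for an already-started ai2thor controller.

    # Inspect the agent's pose after a step.
    event = controller.step(dict(action="RotateRight"))
    agent = event.metadata["agent"]
    print(agent["position"])        # x, y, z in world coordinates
    print(agent["rotation"])        # yaw around the vertical axis
    print(agent["cameraHorizon"])   # camera pitch; 0 means looking straight ahead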

Does a predicted mask have an impact on predicting an action?

Hi, I'm trying to figure out how masks affect action prediction.

I couldn't find equations in the paper that relate actions and masks.
According to the paper, the next action is determined by the visual and linguistic features, the previous action, the previous hidden state, and learnable parameters.
But no mask is used for action prediction.

So I'm now following your code to find out how the masks affect the action predictions, but I can't find it.

If I've misunderstood the paper, could you help me understand how a mask affects the prediction of an action and what a mask is used for?

Thanks for replying!
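
My understanding, which is not confirmed against the authors' code and should be treated as an assumption, is that the action and the mask come from two separate heads driven by the same decoder hidden state, so the predicted mask does not feed into the action logits. Schematically:

    import torch
    import torch.nn as nn

    class TwoHeadDecoderStep(nn.Module):
        # Hypothetical module, not the repository's decoder: both outputs hang
        # off the same hidden state, and the mask head does not feed the action head.
        def __init__(self, hidden=64, n_actions=13, mask_hw=8):
            super().__init__()
            self.action_head = nn.Linear(hidden, n_actions)
            self.mask_head = nn.Sequential(nn.Linear(hidden, mask_hw * mask_hw), nn.Sigmoid())
            self.mask_hw = mask_hw

        def forward(self, h):
            action_logits = self.action_head(h)                            # what to do
            mask = self.mask_head(h).view(-1, self.mask_hw, self.mask_hw)  # where to do it
            return action_logits, mask

    step = TwoHeadDecoderStep()
    action_logits, mask = step(torch.randn(2, 64))

Under this reading, the mask only matters for interaction actions at execution time (it selects the object to act on) and is trained with its own loss rather than entering the action-prediction equations.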

Can't get 100% accuracy in Sub-Goal evaluation with ground-truth actions and masks.

I'm trying to reproduce the Sub-Goal evaluation results with ground-truth actions and masks.
But I got index-out-of-bounds errors and couldn't get 100% PLW for some trajectories in the seen and unseen validation sets. (Both SR and PLW should be 100%, since the evaluation uses ground truths.)

These are the changes I made, only in eval_subgoals.py (lines 69 and 128):

...

 68: expert_init_actions = [a['discrete_action'] for a in traj_data['plan']['low_actions'] if a['high_idx'] < eval_idx]
 69: expert_init_actions_gt = [a['discrete_action'] for a in traj_data['plan']['low_actions']]

...

127: mask = np.squeeze(mask, axis=0) if model.has_interaction(action) else None
128: action = expert_init_actions[t]['action']
     compressed_mask = expert_init_actions_gt['args']['mask'] if 'mask' in expert_init_actions_gt['args'] else None
     mask = env.decompress_mask(compressed_mask) if compressed_mask is not None else None
129: # debug
130:     if args.debug:

...

If these changes correctly implement ground-truth actions and masks, any idea why I can't get 100% PLW?
I also don't understand why I get index-out-of-bounds errors with ground-truth trajectories.

Thanks for replying!

Getting stuck during evaluation

When I evaluate the checkpoint, the process seems to get stuck while initializing the simulation environment.

Loading: exp/model:seq2seq_im_mask,name:pm_and_subgoals_01/best_unseen.pth
Found path: /home/ubuntu/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64
Mono path[0] = '/home/ubuntu/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64_Data/Managed'
Mono config path = '/home/ubuntu/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64_Data/Mono/etc'
Preloaded 'ScreenSelector.so'
Logging to /home/ubuntu/.config/unity3d/Allen Institute for Artificial Intelligence/AI2-Thor/Player.log

Is there something wrong with ai2thor?
Also, is a UI required?
Thanks.
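
One way to narrow this down, independently of ALFRED's wrappers, is to check whether the bare simulator can start and return frames on your machine. The sketch below uses the AI2-THOR 2.1.0 API as I remember it; if this also hangs, the problem is in the X/rendering setup rather than in the checkpoint or the eval code.

    import ai2thor.controller

    controller = ai2thor.controller.Controller()
    controller.start()      # launches the Unity build; needs a reachable X display
    controller.reset("FloorPlan28")
    event = controller.step(dict(action="Initialize", gridSize=0.25))
    print(event.frame.shape)   # e.g. (300, 300, 3) if rendering works
    controller.stop()

A visible UI should not be required as long as an X server with GPU rendering is available, but the simulator does need some X display to attach to.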

Can't validate a model in Test dataset.

First of all, thank you for the awesome code.

I'm trying to validate a model after training.
Validation works great for both seen and unseen splits, but the test splits return errors for both seen and unseen cases.
Here's the error message:

{'tests_seen': 1533,
'tests_unseen': 1529,
'train': 21023,
'valid_seen': 820,
'valid_unseen': 821}
...
ThorEnv started.
Evaluating: data/json_feat_2.1.0/trial_T20190909_042500_949430
No. of trajectories left: 1532
Resetting ThorEnv
Task: Retrieve the sponge from the kitchen island, place in fry pan, put fry pan on kitchen island.
Traceback (most recent call last):
File "/home/user/Desktop/alfred/models/eval/eval_task.py", line 34, in run
cls.evaluate(env, model, r_idx, resnet, traj, args, lock, successes, failures, results)
File "/home/user/Desktop/alfred/models/eval/eval_task.py", line 51, in evaluate
cls.setup_scene(env, traj_data, r_idx, args, reward_type=reward_type)
File "/home/user/Desktop/alfred/models/eval/eval.py", line 120, in setup_scene
env.set_task(traj_data, args, reward_type=reward_type)
File "/home/user/Desktop/alfred/env/thor_env.py", line 122, in set_task
task_type = traj['task_type']
KeyError: 'task_type'
Error: KeyError('task_type',)
...

And redownloading json_feat_2.1.0 didn't help.

How can I solve it?

Again, thanks for sharing the great code.
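
One quick way to narrow this down (a sketch; the paths below are placeholders) is to diff the top-level keys of a test trajectory's traj_data.json against a training one. If 'task_type' only shows up on the training side, the test files simply do not carry the annotations that env.set_task needs.

    import json

    def top_keys(path):
        with open(path) as f:
            return set(json.load(f).keys())

    # Placeholder paths: point these at one training trajectory and one
    # tests_seen trajectory from your local copy of the dataset.
    train_keys = top_keys("path/to/a/train/trial/traj_data.json")
    test_keys = top_keys("path/to/a/tests_seen/trial/traj_data.json")
    print("only in train:", sorted(train_keys - test_keys))
    print("only in test: ", sorted(test_keys - train_keys))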

Should delete the first argument

The first argument, task_type_ind, should be deleted here:

def get_task_str(task_type_ind, object_ind, receptacle_ind=None, toggle_ind=None, mrecep_ind=None):

Otherwise, it causes the inputs to be incorrectly shifted here

return game_util.get_task_str(self.object_target, self.parent_target, self.toggle_target, self.mrecep_target)

This will cause an error if the task type is "examine in light":

self.parent_target = None

An error is raised here because NoneType cannot be converted to an int:
obj = constants.OBJECTS[object_ind].lower()
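
To make the shift concrete, here is a standalone sketch (a hypothetical stand-in, not the repository's get_task_str) showing what happens when the first positional argument is omitted at the call site:

    # Omitting the first positional argument pushes every later value one slot left.
    def get_task_str(task_type_ind, object_ind, receptacle_ind=None,
                     toggle_ind=None, mrecep_ind=None):
        return {"task_type_ind": task_type_ind, "object_ind": object_ind,
                "receptacle_ind": receptacle_ind, "toggle_ind": toggle_ind,
                "mrecep_ind": mrecep_ind}

    # Called with the task type omitted, as in the call site quoted above:
    print(get_task_str(3, None, 7, 2))
    # {'task_type_ind': 3, 'object_ind': None, 'receptacle_ind': 7, 'toggle_ind': 2, 'mrecep_ind': None}

With parent_target equal to None (the examine-in-light case), None lands in object_ind, so a lookup like constants.OBJECTS[object_ind] fails. Dropping task_type_ind from the signature, as suggested above, or passing the indices by keyword would avoid the shift.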

Evaluation Error, "No Protocol Specified"

Hello, my environment is

  • Ubuntu 20.04
  • GTX 1060

I downloaded the pre-trained model and set up a Docker environment to evaluate it.
When I run the following command after running script/run_docker.py, an error occurs.

(alfred_env) yuki@yuki-lab:~/alfred$ python models/eval/eval_seq2seq.py --model_path baseline/best_seen.pth --eval_split valid_seen --data data/json_feat_2.1.0 --model models.model.seq2seq_im_mask --gpu --num_threads 3
{'tests_seen': 1533,
 'tests_unseen': 1529,
 'train': 21023,
 'valid_seen': 820,
 'valid_unseen': 821}
Loading:  baseline/best_seen.pth
Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /home/yuki/.cache/torch/checkpoints/resnet18-5c106cde.pth
100%|######################################################################################################################################################| 46827520/46827520 [00:04<00:00, 11243319.04it/s]
thor-201909061227-Linux64: [||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 100%   4.3 MiB/s]  of 390.MB
thor-201909061227-Linux64: [||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||                                               70%   3.0 MiB/s]  of 390.MBNo protocol specified
Found path: /home/yuki/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64
Mono path[0] = '/home/yuki/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64_Data/Managed'
Mono config path = '/home/yuki/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64_Data/Mono/etc'
thor-201909061227-Linux64: [|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||                                          72%   2.9 MiB/s]  of 390.MBPreloaded 'ScreenSelector.so'
PlayerPrefs - Creating folder: /home/yuki/.config/unity3d/Allen Institute for Artificial Intelligence
PlayerPrefs - Creating folder: /home/yuki/.config/unity3d/Allen Institute for Artificial Intelligence/AI2-Thor
Logging to /home/yuki/.config/unity3d/Allen Institute for Artificial Intelligence/AI2-Thor/Player.log
No protocol specified
thor-201909061227-Linux64: [||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 100%   3.4 MiB/s]  of 390.MB
thor-201909061227-Linux64: [||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 100%   3.3 MiB/s]  of 390.MB
Process Process-4:
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/yuki/alfred/models/eval/eval_task.py", line 20, in run
    env = ThorEnv()
  File "/home/yuki/alfred/env/thor_env.py", line 33, in __init__
    player_screen_width=player_screen_width)
  File "/home/yuki/alfred_env/lib/python3.5/site-packages/ai2thor/controller.py", line 858, in start
    self.download_binary()
  File "/home/yuki/alfred_env/lib/python3.5/site-packages/ai2thor/controller.py", line 796, in download_binary
    os.rename(extract_dir, os.path.join(self.releases_dir(), self.build_name()))
OSError: [Errno 39] Directory not empty: '/home/yuki/.ai2thor/tmp/thor-201909061227-Linux64' -> '/home/yuki/.ai2thor/releases/thor-201909061227-Linux64'
Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/yuki/alfred/models/eval/eval_task.py", line 20, in run
    env = ThorEnv()
  File "/home/yuki/alfred/env/thor_env.py", line 33, in __init__
    player_screen_width=player_screen_width)
  File "/home/yuki/alfred_env/lib/python3.5/site-packages/ai2thor/controller.py", line 858, in start
    self.download_binary()
  File "/home/yuki/alfred_env/lib/python3.5/site-packages/ai2thor/controller.py", line 796, in download_binary
    os.rename(extract_dir, os.path.join(self.releases_dir(), self.build_name()))
OSError: [Errno 39] Directory not empty: '/home/yuki/.ai2thor/tmp/thor-201909061227-Linux64' -> '/home/yuki/.ai2thor/releases/thor-201909061227-Linux64'

And when I set num_threads to 1, eval_seq2seq.py prints "No protocol specified".

yuki@yuki-lab:~/alfred$ python models/eval/eval_seq2seq.py --model_path <model_path>/best_seen.pth --eval_split valid_seen --data data/json_feat_2.1.0 --model models.model.seq2seq_im_mask --gpu --num_threads 3 --subgoals all^C
yuki@yuki-lab:~/alfred$ python models/eval/eval_seq2seq.py --model_path baseline/best_seen.pth --eval_split valid_seen --data data/json_feat_2.1.0 --model models.model.seq2seq_im_mask --gpu --num_threads 1 
{'tests_seen': 1533,
 'tests_unseen': 1529,
 'train': 21023,
 'valid_seen': 820,
 'valid_unseen': 821}
Loading:  baseline/best_seen.pth
Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /home/yuki/.cache/torch/checkpoints/resnet18-5c106cde.pth
100%|######################################################################################################################################################| 46827520/46827520 [00:04<00:00, 11261491.30it/s]
thor-201909061227-Linux64: [||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 100%   7.6 MiB/s]  of 390.MB
No protocol specified
Found path: /home/yuki/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64
Mono path[0] = '/home/yuki/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64_Data/Managed'
Mono config path = '/home/yuki/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64_Data/Mono/etc'
Preloaded 'ScreenSelector.so'
PlayerPrefs - Creating folder: /home/yuki/.config/unity3d/Allen Institute for Artificial Intelligence
PlayerPrefs - Creating folder: /home/yuki/.config/unity3d/Allen Institute for Artificial Intelligence/AI2-Thor
Logging to /home/yuki/.config/unity3d/Allen Institute for Artificial Intelligence/AI2-Thor/Player.log
No protocol specified

Is this message expected?
And how can I deal with the first problem?

Thank you.
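
The "Directory not empty" traceback looks like several worker processes racing to download and extract the same THOR build at once; that is my reading, not an official diagnosis. One workaround sketch, based on the ai2thor 2.1.0 API as I recall it, is to let a single process warm the ~/.ai2thor cache before launching the multi-threaded evaluation:

    import ai2thor.controller

    controller = ai2thor.controller.Controller()
    controller.start()   # downloads and extracts the THOR build if it is missing
    controller.stop()

Once the build is cached, re-running eval_seq2seq.py with --num_threads 3 should no longer hit the extraction race. The "No protocol specified" lines appear to be X11 authorization messages from inside the container and are separate from this error.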

Leaderboard submission

Hello,

I am assuming the file that should be submitted is tests_actseqs_dump_{datetime}.json. When I submit it to the leaderboard, the test status says succeeded, but the numbers do not show up. Could this be because my .json file is in the wrong format?

Thanks so much!

Best,
Muqiao
(Screenshot of the leaderboard submission status attached.)

ai2thor error in evaluation

I run this command on a trained model from a Docker image:

python3 models/eval/eval_seq2seq.py --model_path model_seq2seq_im_mask_name_pm_and_subgoals_01/best_seen.pth --eval_split valid_seen --data data/json_feat_2.1.0 --model models.model.seq2seq_im_mask --gpu --num_threads 3

and get this log:

Found path: /root/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64
Mono path[0] = '/root/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64_Data/Managed'
Mono config path = '/root/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64_Data/Mono/etc'
Unable to preload the following plugins:
	ScreenSelector.so
Display 0 '0': 5760x1251 (primary device).
Logging to /root/.config/unity3d/Allen Institute for Artificial Intelligence/AI2-Thor/Player.log
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.6/dist-packages/ai2thor/controller.py", line 697, in _start_unity_thread
    raise Exception("command: %s exited with %s" % (command, returncode))
Exception: command: ['/root/.ai2thor/releases/thor-201909061227-Linux64/thor-201909061227-Linux64', '-screen-fullscreen', '0', '-screen-quality', '4', '-screen-width', '300', '-screen-height', '300'] exited with 1

Do I need to set some environment variables when running from Docker?
