
l5kit's People

Contributors

aalavian, chih-hu, danielhavir, eeshakumar, epraun, jimexist, joseluisvaz, laky, louis925, lucabergamini, melights, moridaiki, oliver-scheel, pascal-pfeiffer, perone, piantic, pondruska, stefaniespeichert, stefanopini, thedebugger811, witignite, yawei-ye

l5kit's Issues

Average and final displacement error metrics

In metrics.py, avails is used to mask out the invalid errors, but then we take the mean over all timesteps:

error = np.sum(((ground_truth - pred) * avails) ** 2, axis=-1)  # reduce coords and use availability
error = error ** 0.5  # calculate root of error (= L2 norm)
error = np.mean(error, axis=-1)  # average over timesteps 

Wouldn't it be a better estimate if we used avails to average only over the valid timesteps?

I am trying to create metrics for multi-agent predictions in the cleanest way possible, so I want to ignore timesteps where avail == 0.
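
For illustration, here is a minimal sketch of the masked average suggested above; the shapes are my own assumption (ground_truth and pred as (timesteps, 2) arrays, avails as a binary (timesteps,) mask):

import numpy as np

def masked_displacement_error(ground_truth, pred, avails):
    # per-timestep squared error; invalid timesteps are zeroed by the mask
    error = np.sum(((ground_truth - pred) * avails[..., None]) ** 2, axis=-1)
    error = error ** 0.5  # root of the squared error (= L2 norm)
    # divide by the number of valid timesteps instead of all timesteps
    return np.sum(error * avails, axis=-1) / np.maximum(np.sum(avails, axis=-1), 1)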

About raw data

Thanks for contributing this big dataset!
I tried your visualise_data.ipynb like this:

frames = zarr_dataset.frames
scenes = zarr_dataset.scenes
agents = zarr_dataset.agents
print('agents: {}'.format(agents))
print('scenes: {}'.format(scenes))
print('frames: {}'.format(frames))

I get the following result:

agents: <zarr.core.Array '/agents' (1800562,) [('centroid', '<f8', (2,)), ('extent', '<f4', (3,)), ('yaw', '<f4'), ('velocity', '<f4', (2,)), ('track_id', '<u8'), ('label_probabilities', '<f4', (17,))] read-only>
scenes: <zarr.core.Array '/scenes' (96,) [('frame_index_interval', '<i8', (2,)), ('host', '<U16'), ('start_time', '<i8'), ('end_time', '<i8')] read-only>
frames: <zarr.core.Array '/frames' (23846,) [('timestamp', '<i8'), ('agent_index_interval', '<i8', (2,)), ('ego_translation', '<f8', (3,)), ('ego_rotation', '<f8', (3, 3))] read-only>

I think this data has already been processed.
I want to get the raw camera and lidar data; how can I get it?
Thank you very much!

xy_left and xy_right explanation in Semantic Rasterizer.

Hi! Could you please explain the meaning of, and the difference between, the xy_left and xy_right points in the Semantic Rasterizer? When I rasterize only xy_left or only xy_right there is a difference in the visualization, but I don't see the logical difference.

[screenshots: rasterization with only xy_left, with only xy_right, and with both]

[ERROR] The example script for agent_motion_prediction possibly has issues

Hi,

I tried to run the example scripts. While visualise_data worked well, agent_motion_prediction produced a strange eval_dataloader. The dataloaders are built as follows:

dm = LocalDataManager(None)

# ===== INIT DATASETS
rasterizer = build_rasterizer(cfg, dm)
train_dataloader = build_dataloader(cfg, "train", dm, AgentDataset, rasterizer)
eval_dataloader = build_dataloader(cfg, "val", dm, AgentDataset, rasterizer)

This yielded an eval_dataloader of size zero, which led to an error during evaluation (training was fine).

I changed the scene_indices for val_data_loader to 1 in the agent_motion_config file, got a non-empty eval_dataloader, and the result below:

+-------------+--------+
| future step |  MSE   |
+-------------+--------+
|      1      |  2.30  |
|      2      |  7.10  |
|      3      | 54.06  |
|      4      | 106.99 |
|      5      | 134.82 |
|      6      | 205.89 |
|      7      | 181.60 |
|      8      | 192.88 |
|      9      | 198.58 |
|      10     | 225.66 |
...

However, the Visualise Results section doesn't work:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
in
     28
     29 # convert coordinates to AV point-of-view so we can draw them
---> 30 predicted_positions = transform_points(np.concatenate(predicted_positions), data_ego["world_to_image"])
     31 target_positions = transform_points(np.concatenate(target_positions), data_ego["world_to_image"])
     32

<__array_function__ internals> in concatenate(*args, **kwargs)

ValueError: need at least one array to concatenate

Best.

Validation data

Hi

  • I have been able to download the training set. Where can I find validation and test sets?
  • Are you planning to release the weights of the model that was trained on the entire dataset, whose results are published in the paper?
  • Is there any evaluation server or leaderboard yet?

Thanks

AgentDataset and EgoDataset mapping

Hi,

This is more of a help-me-out question than an issue. It is related to this kaggle discussion.

The idea is to find the rationale behind raster size selection. The discussion suggests using the agent's velocity to determine the required raster size. If we could instead use the agent's target positions directly, that would be more concrete. Since the raster image has the ego vehicle fixed at [0.25, 0.5] of the image, I thought of transforming the agent's target-position trajectory into the ego vehicle's coordinate system.

But for this transformation we need the ego vehicle's position and yaw in world coordinates (the centroid and yaw keys of EgoDataset) and the world coordinates of the agent's target positions (from AgentDataset). Is there a mapping already available between instances of AgentDataset and those of EgoDataset (we could iterate over timestamps and ids, but is there an easier solution)? For example, do the first 50 instances of AgentDataset belong to the 1st instance of EgoDataset? Or am I missing something important?
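
For reference, a minimal sketch of how the zarr interval fields tie agents to their ego frame, assuming zarr_dataset is an opened ChunkedDataset with the schemas printed earlier on this page (the frame index is hypothetical):

frames = zarr_dataset.frames
agents = zarr_dataset.agents

frame_idx = 100  # hypothetical frame, i.e. one EgoDataset sample
start, end = frames[frame_idx]["agent_index_interval"]
agents_in_frame = agents[start:end]  # all agent entries observed in that frame

ego_translation = frames[frame_idx]["ego_translation"]  # ego position in world coords
ego_rotation = frames[frame_idx]["ego_rotation"]        # 3x3 world rotation of the ego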

RAM memory leaking

This is my training loop; my RAM utilization keeps growing during the run:

losses_train = []
tr_it = iter(train_dataloader)
progress_bar = tqdm(range(cfg["train_params"]["max_num_steps"]))
st = time.time()

torch.set_grad_enabled(True)

for k in progress_bar:
    
    data_cpu = next(tr_it)
    # move each tensor in the batch onto the device
    data = {key: value.to(device) for key, value in data_cpu.items()}
    del data_cpu
    
    model.train()
    loss, _ = forward(data, model, device, criterion)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    
    loss_cpu = loss.detach().item()

    losses_train.append(loss_cpu)
    progress_bar.set_description("loss: % 3.2f loss(avg): % 3.2f"%(loss_cpu,np.mean(losses_train)))
    
    del data, loss, loss_cpu
    
print("running time", time.time() - st)

Running this cell again initially releases the memory, but then it grows again. The memory is released specifically on the call
tr_it = iter(train_dataloader). I am running on Ubuntu 20.04 in a jupyter notebook; the dataloader config is the following:

{'key': 'scenes/train.zarr',
 'batch_size': 16,
 'shuffle': True,
 'num_workers': 16}

Thank you in advance!

l5kit getting stuck during load_agents_mask

I'm trying to create an AgentDataset, but it seems to get stuck here:

AgentDataset -> load_agents_mask -> convenience.load

[screenshot: the script hanging while building the AgentDataset]

What can I do about this?

Functionality to allow Ego/Agent-Dataset to not have a rasterizer

Would it be possible to add functionality to not use a rasterizer in EgoDataset? This could speed up data loading: some solutions only use data["history_positions"], so generating the images just adds computational overhead.
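
One possible stopgap, sketched under the assumption that the Rasterizer base class in l5kit.rasterization exposes rasterize and to_rgb methods as in the version discussed here (class name hypothetical, untested):

import numpy as np
from l5kit.rasterization import Rasterizer  # assumed importable in this l5kit version

class BlankRasterizer(Rasterizer):
    """Return a tiny blank image so the sampling pipeline runs without
    paying the rasterization cost (only history/target arrays are used)."""

    def rasterize(self, *args, **kwargs):
        return np.zeros((1, 1, 3), dtype=np.float32)

    def to_rgb(self, input_im, **kwargs):
        return input_im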

Duplicate Track Id in the same frame

While processing the sample dataset, I found duplicate track ids in the same frame, and their attribute values are not the same.

array([([  671.78094482, -2196.83374023], [0.6011633 , 1.2727729 , 0.38642073], 0.99196154, [4.2793603, 7.436639 ], 1, [0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
       ([  672.21514893, -2196.10375977], [0.6011633 , 1.2727729 , 0.38642073], 0.9913136 , [4.2783957, 7.4371986], 1, [0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])],
      dtype=[('centroid', '<f8', (2,)), ('extent', '<f4', (3,)), ('yaw', '<f4'), ('velocity', '<f4', (2,)), ('track_id', '<u8'), ('label_probabilities', '<f4', (17,))])

The timestamp is 1572643685901838786.
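
A quick sketch for reproducing the check, assuming zarr_dataset is the opened sample ChunkedDataset:

import numpy as np

frame = zarr_dataset.frames[0]  # pick any frame
start, end = frame["agent_index_interval"]
track_ids = zarr_dataset.agents[start:end]["track_id"]
ids, counts = np.unique(track_ids, return_counts=True)
print("duplicated track_ids:", ids[counts > 1])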

Error when using chopped dataset function

Hi,

I'm trying to reproduce the model validation part of the l5kit reference notebook locally. I have extracted the data (kaggle data, ~22 GB) and am using a conda environment (Ubuntu 16.04, Python 3.6, l5kit version 1.1.0) to run validation. But when using the create_chopped_dataset function, I get the following error.

[screenshot: zarr error raised by create_chopped_dataset]

Any help would be most welcome.

Data_format for Agents

As you say in data_format.md:

AGENT_DTYPE = [
    ("centroid", np.float64, (2,)),
    ("extent", np.float32, (3,)),
    ("yaw", np.float32),
    ("velocity", np.float32, (2,)),
    ("track_id", np.uint64),
    ("label_probabilities", np.float32, (len(LABELS),)),
]

What's the meaning of extent?
I guess it's the agent's height, width and length?
But when I extract the data in scene[0]:

frame_id timestamp agent_x agent_y agent_ex1 agent_ex2 agent_ex3 agent_yaw agent_velx agent_vely agent_id agent_category
0 1.26E+18 665.0342 -2207.51 4.391328812 1.813830376 1.590975761 1.016675115 0 0 1 CAR
1 1.26E+18 666.9965 -2204.96 0.586505592 1.266370177 0.380691528 0.986888647 0 0 1 CAR
2 1.26E+18 667.0374 -2204.88 0.58669126 1.267707825 0.38143602 1.007894516 0.235175833 0.37267229 1 CAR
3 1.26E+18 667.2947 -2204.42 0.586929798 1.269182086 0.381702363 0.999890804 1.01263988 1.74916482 1 CAR
4 1.26E+18 667.9021 -2203.44 0.589754522 1.271379709 0.383637488 0.996144176 2.294893503 3.804159403 1 CAR
5 1.26E+18 668.7579 -2202.07 0.600590885 1.271674395 0.385867119 0.993598878 3.528813839 5.801221848 1 CAR
6 1.26E+18 669.5693 -2200.51 0.601163328 1.272772908 0.386420727 0.99629432 4.276067257 7.43839407 1 CAR
7 1.26E+18 669.9974 -2199.76 0.601163328 1.272772908 0.386420727 0.995513678 4.276094437 7.438437939 1 CAR
8 1.26E+18 670.4583 -2199.02 0.601163328 1.272772908 0.386420727 0.994767904 4.275854111 7.438612461 1 CAR
9 1.26E+18 670.9053 -2198.28 0.601163328 1.272772908 0.386420727 0.993965983 4.276028156 7.43853426 1 CAR
10 1.26E+18 671.3424 -2197.54 0.601163328 1.272772908 0.386420727 0.993202209 4.275919914 7.4386096 1 CAR
11 1.26E+18 671.7809 -2196.83 0.601163328 1.272772908 0.386420727 0.991961718 4.279360294 7.436638832 1 CAR
12 1.26E+18 672.2151 -2196.1 0.601163328 1.272772908 0.386420727 0.991313756 4.278395653 7.437198639 1 CAR
13 1.26E+18 672.6445 -2195.36 0.601163328 1.272772908 0.386420727 0.990654171 4.277519703 7.437705994 1 CAR
14 1.26E+18 673.081 -2194.63 0.601163328 1.272772908 0.386420727 0.989976883 4.276776791 7.43813467 1 CAR
15 1.26E+18 673.4843 -2193.88 0.601163328 1.272772908 0.386420727 0.989197731 4.276790619 7.438127995 1 CAR
16 1.26E+18 673.9106 -2193.15 0.601163328 1.272772908 0.386420727 0.988506138 4.27615118 7.43849659 1 CAR

The parameters agent_ex1, agent_ex2 and agent_ex3 (extent[0], extent[1] and extent[2]) vary over time, which confuses me.
Also, where can I find the definition of the coordinate system? Does centroid[0] mean the x value, and which axis is the x axis?
I'm looking forward to your reply.
Thank you very much!
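
For what it's worth, extent is usually read as the agent's bounding-box size in meters and centroid[0] as the world x coordinate; under that assumption (which the docs should confirm), the footprint can be recovered like this sketch:

import numpy as np

def agent_corners(centroid, yaw, extent):
    """Oriented 2D footprint of an agent (sketch; assumes extent[0] = length
    along the heading and extent[1] = width, both in meters)."""
    half_l, half_w = extent[0] / 2.0, extent[1] / 2.0
    corners = np.array([[ half_l,  half_w],
                        [ half_l, -half_w],
                        [-half_l, -half_w],
                        [-half_l,  half_w]])
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    return corners @ rot.T + np.asarray(centroid)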

AttributeError: type object 'Callable' has no attribute '_abc_registry' while installation

Hi!

I encountered the following error both when running pip install l5kit and when installing as a developer.

ERROR: Command errored out with exit status 1:
   command: /home/tonychenxyz/anaconda3/envs/torch1.6/bin/python /home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-vgrf1i5t/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple --find-links https://download.pytorch.org/whl/torch_stable.html -- setuptools wheel
       cwd: None
  Complete output (42 lines):
  Traceback (most recent call last):
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/runpy.py", line 193, in _run_module_as_main
      "__main__", mod_spec)
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/runpy.py", line 85, in _run_code
      exec(code, run_globals)
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip/__main__.py", line 26, in <module>
      sys.exit(_main())
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip/_internal/cli/main.py", line 73, in main
      command = create_command(cmd_name, isolated=("--isolated" in cmd_args))
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip/_internal/commands/__init__.py", line 104, in create_command
      module = importlib.import_module(module_path)
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/importlib/__init__.py", line 127, in import_module
      return _bootstrap._gcd_import(name[level:], package, level)
    File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
    File "<frozen importlib._bootstrap>", line 983, in _find_and_load
    File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
    File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
    File "<frozen importlib._bootstrap_external>", line 728, in exec_module
    File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip/_internal/commands/install.py", line 17, in <module>
      from pip._internal.cli.req_command import RequirementCommand, with_cleanup
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip/_internal/cli/req_command.py", line 16, in <module>
      from pip._internal.index.collector import LinkCollector
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip/_internal/index/collector.py", line 14, in <module>
      from pip._vendor import html5lib, requests
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip/_vendor/requests/__init__.py", line 125, in <module>
      from . import utils
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip/_vendor/requests/utils.py", line 25, in <module>
      from . import certs
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip/_vendor/requests/certs.py", line 15, in <module>
      from pip._vendor.certifi import where
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip/_vendor/certifi/__init__.py", line 1, in <module>
      from .core import contents, where
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip/_vendor/certifi/core.py", line 12, in <module>
      from importlib.resources import path as get_path, read_text
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/importlib/resources.py", line 11, in <module>
      from typing import Iterable, Iterator, Optional, Set, Union   # noqa: F401
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/typing.py", line 1357, in <module>
      class Callable(extra=collections_abc.Callable, metaclass=CallableMeta):
    File "/home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/typing.py", line 1005, in __new__
      self._abc_registry = extra._abc_registry
  AttributeError: type object 'Callable' has no attribute '_abc_registry'
  ----------------------------------------
ERROR: Command errored out with exit status 1: /home/tonychenxyz/anaconda3/envs/torch1.6/bin/python /home/tonychenxyz/anaconda3/envs/torch1.6/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-vgrf1i5t/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple --find-links https://download.pytorch.org/whl/torch_stable.html -- setuptools wheel Check the logs for full command output.

Error with num_workers>0 in DistributedDataParallel mode

I got an error when trying to train a model with DistributedDataParallel and num_workers > 0 in the DataLoader. With num_workers=0 everything works fine. My code:

from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

from l5kit.dataset import AgentDataset
from l5kit.data import ChunkedDataset, LocalDataManager
from l5kit.rasterization import build_rasterizer

dm = LocalDataManager(data_dir)
cfg = lyft_config['train_data_loader'] if istrain else lyft_config['val_data_loader']
rasterizer = build_rasterizer(lyft_config, dm)
train_zarr = ChunkedDataset(dm.require(cfg["key"])).open()
dataset = AgentDataset(lyft_config, train_zarr, rasterizer)

sampler = DistributedSampler(dataset, rank=rank, num_replicas=world_size, shuffle=True)

data_loader = DataLoader(dataset=dataset,
                         batch_size=batch_size,
                         sampler=sampler,
                         pin_memory=pin_memory,
                         drop_last=drop_last,
                         num_workers=num_workers)

The error I get:

    for batch_index, data in enumerate(tqdm_train):
  File "/root/workdir/anaconda3/envs/lyft/lib/python3.8/site-packages/tqdm/std.py", line 1165, in __iter__
    for obj in iterable:
  File "/root/workdir/anaconda3/envs/lyft/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 291, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "/root/workdir/anaconda3/envs/lyft/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 737, in __init__
    w.start()
  File "/root/workdir/anaconda3/envs/lyft/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/root/workdir/anaconda3/envs/lyft/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/root/workdir/anaconda3/envs/lyft/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/root/workdir/anaconda3/envs/lyft/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/root/workdir/anaconda3/envs/lyft/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/root/workdir/anaconda3/envs/lyft/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/root/workdir/anaconda3/envs/lyft/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'google.protobuf.pyext._message.RepeatedCompositeContainer' object

On Windows 10, ValueError: cannot find context for 'fork'

When I tried to run visualise_data.ipynb on Windows, I received the following error:

D:\l5kit\l5kit\l5kit\dataset\select_agents.py in <module>
     20 from l5kit.geometry import angular_distance
     21 
---> 22 multiprocessing.set_start_method("fork", force=True)  # this fix loop in python 3.8 on MacOS
     23 os.environ["BLOSC_NOLOCK"] = "1"  # this is required for multiprocessing
     24 

~\Anaconda3\envs\py37\lib\multiprocessing\context.py in set_start_method(self, method, force)
    244             self._actual_context = None
    245             return
--> 246         self._actual_context = self.get_context(method)
    247 
    248     def get_start_method(self, allow_none=False):

~\Anaconda3\envs\py37\lib\multiprocessing\context.py in get_context(self, method)
    236             return self._actual_context
    237         else:
--> 238             return super().get_context(method)
    239 
    240     def set_start_method(self, method, force=False):

~\Anaconda3\envs\py37\lib\multiprocessing\context.py in get_context(self, method)
    190             ctx = _concrete_contexts[method]
    191         except KeyError:
--> 192             raise ValueError('cannot find context for %r' % method) from None
    193         ctx._check_available()
    194         return ctx

ValueError: cannot find context for 'fork'

Are there any plans to release lane centerlines as part of the semantic map?

I've been playing around with the Lyft prediction dataset for the past couple of days and have been really impressed by the volume of data and the slick implementation of scene rasterization in l5kit. However, I think the dataset could benefit from alternative forms of information about the semantic map, such as lane centerlines and connectivity.

While the current dataset does include lane boundaries, it is missing lane centerlines - an important piece of the puzzle that is provided in both the Argoverse and NuScenes prediction datasets. In the absence of such information, it becomes more difficult to directly apply vector and graph-based forecasting models for comparison across datasets.

Are there currently any plans to release lane centerlines or connectivity graphs in future releases of the dataset?

Thanks!

Use GPU for rasterization

During training we find that GPU utilization is very low while CPU utilization is 100%, as most of the computing time is spent loading the data and preparing the semantic maps. I ran a profiler on the generate_agent_sample function in agent_sampling.py, and around 55% of the time is taken up by rasterization, particularly the transformations. (The rest of the time is spent reading the zarr data.)

Would it be possible to have the rasterizers use the GPU by optionally passing a device, like device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")? I haven't worked with rasterization before and am not deeply familiar with l5kit yet, so I don't know how much effort it would take to refactor all that rasterization code. It would also help if one of the devs could point out how one could go about this.

ValueError: cannot find context for 'fork'

OS: Windows 8.1
Python version: 3.7.5

Description: I guess it is due to multiprocessing. I am not able to run this on my local computer.

Python 3.7.5 (tags/v3.7.5:5c02a39a0b, Oct 15 2019, 00:11:34) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from l5kit.dataset import AgentDataset, EgoDataset
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Prasad\AppData\Local\Programs\Python\Python37\lib\site-packages\l5kit\dataset\__init__.py", line 1, in <module>
    from .agent import AgentDataset
  File "C:\Users\Prasad\AppData\Local\Programs\Python\Python37\lib\site-packages\l5kit\dataset\agent.py", line 12, in <module>
    from .select_agents import TH_DISTANCE_AV, TH_EXTENT_RATIO, TH_YAW_DEGREE, select_agents
  File "C:\Users\Prasad\AppData\Local\Programs\Python\Python37\lib\site-packages\l5kit\dataset\select_agents.py", line 21, in <module>
    multiprocessing.set_start_method("fork", force=True)  # this fix loop in python 3.8 on MacOS
  File "C:\Users\Prasad\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 246, in set_start_method
    self._actual_context = self.get_context(method)
  File "C:\Users\Prasad\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 238, in get_context
    return super().get_context(method)
  File "C:\Users\Prasad\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 192, in get_context
    raise ValueError('cannot find context for %r' % method) from None
ValueError: cannot find context for 'fork'

world_to_image error?

I understood that world_to_image below maps the reference frame from world to image.

However, the rotation-matrix values go beyond the -1 to 1 range. Isn't this a rotation matrix?

For example, below is the output of dataset[0]["world_to_image"], and most elements of the 2D rotation block (matrix[:2, :2]) are larger than 1 or smaller than -1. As far as I know they should be between -1 and 1. Am I missing something?

array([[ 1.09409782e+00,  1.67420129e+00,  2.99894782e+03],
       [-1.67420129e+00,  1.09409782e+00,  3.64024753e+03],
       [ 0.00000000e+00,  0.00000000e+00,  1.00000000e+00]])
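
For what it's worth, a plausible explanation (my reading, not an official answer): the 2x2 block combines the rotation with the raster's pixels-per-meter scale, so its entries need not lie in [-1, 1]. A quick check on the matrix above:

import numpy as np

m = np.array([[ 1.09409782e+00,  1.67420129e+00,  2.99894782e+03],
              [-1.67420129e+00,  1.09409782e+00,  3.64024753e+03],
              [ 0.00000000e+00,  0.00000000e+00,  1.00000000e+00]])
linear = m[:2, :2]
scale = np.linalg.norm(linear[:, 0])  # pixels per meter: 2.0 here, i.e. 0.5 m per pixel
rotation = linear / scale             # a proper rotation, entries within [-1, 1]
print(scale)
print(rotation)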

Different rasterizations?

In the paper it is mentioned that the software kit can rasterize the image in different ways, as attached below. I could rasterize in the format of the image on the right, but I have not been able to do so with the style on the left. I would appreciate it if someone could explain how to do that.

[screenshot: the two rasterization styles from the paper]

Installation error

Could you please look at this issue?

pip3 install l5kit
Collecting l5kit
Using cached l5kit-1.1.0-py3-none-any.whl (82 kB)
Requirement already satisfied: protobuf>=3.12.2 in c:\users\xxxxx\miniconda3\lib\site-packages (from l5kit) (3.13.0)
Requirement already satisfied: tqdm in c:\users\xxxxx\miniconda3\lib\site-packages (from l5kit) (4.48.2)
Collecting transforms3d
Downloading transforms3d-0.3.1.tar.gz (62 kB)
|████████████████████████████████| 62 kB 6.4 kB/s
Requirement already satisfied: pyyaml in c:\users\xxxx\miniconda3\lib\site-packages (from l5kit) (5.3.1)
Collecting zarr
Downloading zarr-2.5.0-py3-none-any.whl (131 kB)
|████████████████████████████████| 131 kB 3.3 MB/s
Requirement already satisfied: notebook in c:\users\xxxxx\miniconda3\lib\site-packages (from l5kit) (6.1.1)
Requirement already satisfied: setuptools in c:\users\xxxxxx\miniconda3\lib\site-packages (from l5kit) (49.6.0.post20200814)
ERROR: Could not find a version that satisfies the requirement torchvision<1.0.0,>=0.6.0 (from l5kit) (from versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.2.post2, 0.2.2.post3, 0.3.0, 0.4.1, 0.5.0)
ERROR: No matching distribution found for torchvision<1.0.0,>=0.6.0 (from l5kit)

ValueError: group not found at path '' Error

Dear Team,

As described in the 'Download the datasets' section, I have kept the aerial map, semantic map and scenes subfolders and the json file.

I am getting the following error when I run visualise_data.ipynb.

Error code section:

dm = LocalDataManager()
dataset_path = dm.require(cfg["val_data_loader"]["key"])
zarr_dataset = ChunkedDataset(dataset_path)
zarr_dataset.open()
print(zarr_dataset)

Error message:

[screenshot: ValueError: group not found at path '']

Could you please provide a solution for this issue?

Thanks and regards,
Rajesh Paul

regarding agent_sampling

Team, while debugging l5kit I found something, but I am not sure whether it is functionally correct. Kindly ignore this if it doesn't make sense.

At agent-sampling time, I can see that you slice history and future frames based on the number of frames of interest and a step_size (negative for history, positive for future). For example:

list_of_frame_index_of_scene = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
interested_current_state_index = 4
step_size = 2
number_of_frames_interested_states = 2
# then
history_and_future_frames_of_current_state_index = [0, 2, 4, 6, 8]

When getting agents via the agent_index_intervals stored in the frames, you parse sequentially, using the start index of the first frame and the end index of the last frame in the given frame arrays.

But this may not hold for a frame array built with a step_size, right? Some frames (states) are missing in between. Or are you assuming that the agents available at the first frame and the last frame of a given array are the same?

We already had a discussion here; please refer to it.

Request For More Detailed Rasterized Dataset Documentation

A couple of questions/details about the datasets returned by EgoDataset() and AgentDataset() that I wasn't able to find in the l5kit documentation or on the lyft level5 prediction website:

  • Units: What units are target_positions, centroid, extent, target_yaws and yaw in? I'm guessing meters and radians respectively.
  • Absolute or relative?: From the example notebooks, it seems target_positions are displacements relative to the initial centroid at the start of a sequence. Is this also the case for target_yaws and yaw, i.e. can we calculate the absolute yaw heading at time step i by taking yaw + target_yaws[i]?
  • target_availabilities: This field has dimension (batch, future_num_frames, 3). From its name I'm guessing it's a binary mask, but what does the last dimension correspond to (x, y, yaw respectively)? Additionally, what does target_availability = 0 mean: does it correspond to the sensors failing to observe that dimension of the agent at that point in time in real life?

I think it would be helpful to document details such as these (and others, such as EgoDataset() having -1 for track_id) somewhere. Thank you!

Timestamps for the frames seem off

Hello @lucabergamini,
The timestamps for each of the frames seem a little off. For the first few frames I got 1256678902801892606, 1256678902901714926 and 1256678903001499246, which all fall on October 27, 2009. The rest of the timestamps are similarly off. What am I doing wrong?

My code:

frames = zarr_dataset.frames
for idx_coord, idx_data in enumerate(tqdm(range(len(frames)), desc="getting centroid to plot trajectory")):
    time = zarr_dataset.frames[idx_data][0]
    print(time)

The converter I used:
https://www.epochconverter.com/

Thanks,
Shreya

Cannot find file "sample.zarr"

Hello,

I have downloaded the prediction dataset and extracted the following files: "aerial_map.tar", "semantic_map.tar", "training1-1.tar", "training1-2.tar" and "validate.tar".

I got every file and folder that is displayed in your Readme except "sample.zarr".
I cannot look into the tar files with WinRAR; where should this "sample.zarr" be located?

When I start an example script, I get the corresponding error about the missing sample.zarr.

Error while visualization.

Hello team, I am getting an "Argument radius is required to be an integer" error while visualizing the scenes.

[screenshot: the visualization error]

On Windows 10, AttributeError: Can't pickle local object 'StringEncoder.<locals>.EncodeField'

Hello,

I'm using Windows 10, torch 1.5.1+cu101 and torchvision 0.6.1+cu101.

I tried to run the example scripts. visualise_data worked well, but in agent_motion_prediction I get an error when converting the dataloader to an iterator, as follows:

AttributeError                            Traceback (most recent call last)
<ipython-input-19-b7d59c605d01> in <module>
      1 # ==== TRAIN LOOP
----> 2 tr_it = iter(train_dataloader)
      3 progress_bar = tqdm(range(cfg["train_params"]["max_num_steps"]))
      4 losses_train = []
      5 for _ in progress_bar:

~\Anaconda3\envs\Kaggle_Lyft_201126A\lib\site-packages\torch\utils\data\dataloader.py in __iter__(self)
    277             return _SingleProcessDataLoaderIter(self)
    278         else:
--> 279             return _MultiProcessingDataLoaderIter(self)
    280 
    281     @property

~\Anaconda3\envs\Kaggle_Lyft_201126A\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
    717             #     before it starts, and __del__ tries to join but will get:
    718             #     AssertionError: can only join a started process.
--> 719             w.start()
    720             self._index_queues.append(index_queue)
    721             self._workers.append(w)

~\Anaconda3\envs\Kaggle_Lyft_201126A\lib\multiprocessing\process.py in start(self)
    119                'daemonic processes are not allowed to have children'
    120         _cleanup()
--> 121         self._popen = self._Popen(self)
    122         self._sentinel = self._popen.sentinel
    123         # Avoid a refcycle if the target function holds an indirect

~\Anaconda3\envs\Kaggle_Lyft_201126A\lib\multiprocessing\context.py in _Popen(process_obj)
    222     @staticmethod
    223     def _Popen(process_obj):
--> 224         return _default_context.get_context().Process._Popen(process_obj)
    225 
    226 class DefaultContext(BaseContext):

~\Anaconda3\envs\Kaggle_Lyft_201126A\lib\multiprocessing\context.py in _Popen(process_obj)
    325         def _Popen(process_obj):
    326             from .popen_spawn_win32 import Popen
--> 327             return Popen(process_obj)
    328 
    329     class SpawnContext(BaseContext):

~\Anaconda3\envs\Kaggle_Lyft_201126A\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
     91             try:
     92                 reduction.dump(prep_data, to_child)
---> 93                 reduction.dump(process_obj, to_child)
     94             finally:
     95                 set_spawning_popen(None)

~\Anaconda3\envs\Kaggle_Lyft_201126A\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
     58 def dump(obj, file, protocol=None):
     59     '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60     ForkingPickler(file, protocol).dump(obj)
     61 
     62 #

AttributeError: Can't pickle local object 'StringEncoder.<locals>.EncodeField'
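
A common workaround, offered as an assumption rather than an official fix: on Windows the DataLoader workers are started with "spawn", which pickles the dataset (including the protobuf-backed semantic map); loading in the main process avoids the pickling step entirely:

from torch.utils.data import DataLoader

# num_workers=0 keeps all loading in the main process, so nothing is pickled
# (`train_dataset` is the AgentDataset built earlier; name hypothetical)
train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=12, num_workers=0)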

A few questions

Thanks very much for this contribution to the research community and for this devkit. A few assorted questions about the 1001 hours dataset:

(1) Would it be possible to add the rendered notebooks with output in Github, rather than just the code?
(2) Would you all be able to make a map notebook, to better understand the format, e.g. road vs. lane geometry, like this notebook?
(3) To what precision is the aerial imagery aligned with the published map? It looks like Page Mill Road's road geometry doesn't align with the aerial map here:

[screenshot: Page Mill Road geometry misaligned with the aerial map]

(4) I ran into the following bug when installing requirements on a Mac, following the setup instructions with pip install -r requirements.txt:

  During handling of the above exception, another exception occurred:
  
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/private/var/folders/m9/l00nd84d453fcx907pmlyd2h0000gq/T/pip-install-3ua4_jeg/numcodecs/setup.py", line 362, in <module>
      run_setup(with_extensions)
    File "/private/var/folders/m9/l00nd84d453fcx907pmlyd2h0000gq/T/pip-install-3ua4_jeg/numcodecs/setup.py", line 355, in run_setup
      license='MIT',
    File "/Users/johnlamb/miniconda3/lib/python3.7/site-packages/setuptools/__init__.py", line 145, in setup
      return distutils.core.setup(**attrs)
    File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/core.py", line 148, in setup
      dist.run_commands()
    File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/dist.py", line 966, in run_commands
      self.run_command(cmd)
    File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/Users/johnlamb/miniconda3/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 192, in run
      self.run_command('build')
    File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/command/build.py", line 135, in run
      self.run_command(cmd_name)
    File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/private/var/folders/m9/l00nd84d453fcx907pmlyd2h0000gq/T/pip-install-3ua4_jeg/numcodecs/setup.py", line 279, in run
      build_ext.run(self)
    File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/command/build_ext.py", line 340, in run
      self.build_extensions()
    File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
      self._build_extensions_serial()
    File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
      self.build_extension(ext)
    File "/private/var/folders/m9/l00nd84d453fcx907pmlyd2h0000gq/T/pip-install-3ua4_jeg/numcodecs/setup.py", line 289, in build_extension
      raise BuildFailed()
  __main__.BuildFailed
  
  ----------------------------------------
  Failed building wheel for numcodecs
  Running setup.py clean for numcodecs
Failed to build numcodecs
awscli 1.16.209 has requirement colorama<=0.3.9,>=0.2.5, but you'll have colorama 0.4.3 which is incompatible.
readme-renderer 26.0 has requirement Pygments>=2.5.1, but you'll have pygments 2.4.0 which is incompatible.

and

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/private/var/folders/m9/l00nd84d453fcx907pmlyd2h0000gq/T/pip-install-3ua4_jeg/numcodecs/setup.py", line 362, in <module>
        run_setup(with_extensions)
      File "/private/var/folders/m9/l00nd84d453fcx907pmlyd2h0000gq/T/pip-install-3ua4_jeg/numcodecs/setup.py", line 355, in run_setup
        license='MIT',
      File "/Users/johnlamb/miniconda3/lib/python3.7/site-packages/setuptools/__init__.py", line 145, in setup
        return distutils.core.setup(**attrs)
      File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/Users/johnlamb/miniconda3/lib/python3.7/site-packages/setuptools/command/install.py", line 61, in run
        return orig.install.run(self)
      File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/command/install.py", line 545, in run
        self.run_command('build')
      File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/command/build.py", line 135, in run
        self.run_command(cmd_name)
      File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/private/var/folders/m9/l00nd84d453fcx907pmlyd2h0000gq/T/pip-install-3ua4_jeg/numcodecs/setup.py", line 279, in run
        build_ext.run(self)
      File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/command/build_ext.py", line 340, in run
        self.build_extensions()
      File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
        self._build_extensions_serial()
      File "/Users/johnlamb/miniconda3/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
        self.build_extension(ext)
      File "/private/var/folders/m9/l00nd84d453fcx907pmlyd2h0000gq/T/pip-install-3ua4_jeg/numcodecs/setup.py", line 289, in build_extension
        raise BuildFailed()
    __main__.BuildFailed
    
    ----------------------------------------
Command "/Users/johnlamb/miniconda3/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/m9/l00nd84d453fcx907pmlyd2h0000gq/T/pip-install-3ua4_jeg/numcodecs/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /private/var/folders/m9/l00nd84d453fcx907pmlyd2h0000gq/T/pip-record-z8wxw4rd/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/m9/l00nd84d453fcx907pmlyd2h0000gq/T/pip-install-3ua4_jeg/numcodecs/

How to train using Tensorflow

I am currently working on Lyft's kaggle competition to predict the motion of AVs. All the public models are implemented in pytorch, but I usually work in tensorflow and would like to know how to prepare a tensorflow model for this task. Thanks!
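
One possible route, sketched under the assumption that you already have an l5kit AgentDataset called dataset and TensorFlow 2.4+ (for output_signature): wrap the dataset in a Python generator and hand it to tf.data, since each sample is a dict of numpy arrays.

import numpy as np
import tensorflow as tf

def gen():
    # `dataset` = AgentDataset(cfg, zarr_dataset, rasterizer), built with l5kit as usual
    for i in range(len(dataset)):
        sample = dataset[i]
        yield (sample["image"].astype(np.float32),
               sample["target_positions"].astype(np.float32))

tf_dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(None, None, None), dtype=tf.float32),  # (C, H, W) raster
        tf.TensorSpec(shape=(None, 2), dtype=tf.float32),           # future (x, y) offsets
    ),
).shuffle(1024).batch(16)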

AgentDataset with rasterizer set to None

The code crashes when the rasterizer is set to None, because a special case for handling data["image"] = None is missing at line 84 of ego.py; see the specific error below.

l5kit/dataset/ego.py in get_frame(self, scene_index, state_index, track_id)
     82         data = self.sample_function(state_index, frames, self.dataset.agents, self.dataset.tl_faces, track_id)
     83         # 0,1,C -> C,0,1
---> 84         image = data["image"].transpose(2, 0, 1)

Lane color configuration

Hi,

The images that I have created with the 'py_semantic' rasterizer look something like this:

[screenshot: py_semantic rasterized image]

What is the setting to create images like the one below, where the lanes in each direction have a specific color? (this link)

[screenshot: rasterization with per-direction lane colors]

Thanks

bug in agent sampling?

I found this line at l5kit/l5kit/sampling/agent_sampling.py line 193: "agents" is both the loop variable and the list being iterated over. I'm not sure what this implies, but it should probably be corrected.

for i, (frame, agents) in enumerate(zip(frames, agents)):
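
For context, the line still works because zip evaluates the outer agents list before the loop body rebinds the name; still, renaming would make the intent clearer. A runnable sketch with hypothetical names and data:

# `frames_agents` stands in for the original (shadowed) `agents` list
frames = ["frame_0", "frame_1"]
frames_agents = [["agent_a"], ["agent_b"]]
for i, (frame, frame_agents) in enumerate(zip(frames, frames_agents)):
    print(i, frame, frame_agents)  # no name shadowing inside the loop body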

[Error 5] Access is denied for create_chopped_dataset due to zarr append

Environment: Windows 7, Python 3.7, l5kit 1.1.0, zarr 2.4, torchvision 0.7

In the example "agent_motion_prediction", at the evaluation line:

eval_base_path = create_chopped_dataset(dm.require(eval_cfg["key"]), cfg["raster_params"]["filter_agents_threshold"], num_frames_to_chop, cfg["model_params"]["future_num_frames"], MIN_FUTURE_STEPS)

four arrays of the chunked dataset are appended in zarr_utils. This works well with "sample.zarr", but when I use validate.zarr (much bigger), I get [Error 5] Access is denied in the "validate_chopped_100" directory.
I created another chunked dataset without linking it to any storage on the hard drive and did not get the problem, so I strongly believe this is an issue with zarr's append method.
I made some modifications to zarr_utils and zarr_dataset in l5kit to get rid of the issue; let me know if you want me to contribute or to see my modifications (you may be able to optimize them better). I am willing to share and discuss.

How can I get history data from dataloader (example/agent_motion_prediction.ipynb)

Hello, I wonder how I can get the history positions and yaws of the vehicles from the dataloader.
In the agent_motion_prediction example, the number of history frames can be set in the agent_motion_config.yaml file.
However, when I get data from the dataloader built by the build_dataloader function, I cannot find any history data.

Here is the code:

>>> tr_it = iter(train_dataloader)
>>> data = next(tr_it)

>>> print(data.keys())
dict_keys(['image', 'target_positions', 'target_yaws', 
  'target_availabilities', 'world_to_image', 'track_id', 
  'timestamp', 'centroid', 'yaw', 'extent'])

and here are the configurations (agent_motion_config.yaml):

###################
## Model options
model_params:
  model_architecture: "resnet50"

  history_num_frames: 3
  history_step_size: 1
  history_delta_time: 0.1

  future_num_frames: 50
  future_step_size: 1
  future_delta_time: 0.1

How can I get history data?
Thanks!
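
For what it's worth, more recent l5kit versions include history keys in the same sample dict ("history_positions" is referenced elsewhere on this page). A quick sketch to check what your version exposes:

data = next(iter(train_dataloader))
print(sorted(data.keys()))                # look for history_* entries
history = data.get("history_positions")  # None if this version predates the key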

Not able to install the package due to torch version

When installing the package I get the following output:

(base) C:\StudioProjects> pip install l5kit
Collecting l5kit
  Using cached l5kit-1.0.6-py3-none-any.whl (81 kB)
Requirement already satisfied: setuptools in c:\users\nosou\anaconda3\lib\site-packages (from l5kit) (49.6.0.post20200814)
Collecting pymap3d
  Using cached pymap3d-2.4.1.tar.gz (30 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Requirement already satisfied: tqdm in c:\users\nosou\anaconda3\lib\site-packages (from l5kit) (4.48.2)
Requirement already satisfied: imageio in c:\users\nosou\anaconda3\lib\site-packages (from l5kit) (2.9.0)
Requirement already satisfied: scipy in c:\users\nosou\anaconda3\lib\site-packages (from l5kit) (1.3.1)
Collecting ptable
  Downloading PTable-0.9.2.tar.gz (31 kB)
Requirement already satisfied: matplotlib in c:\users\nosou\anaconda3\lib\site-packages (from l5kit) (3.3.1)
ERROR: Could not find a version that satisfies the requirement torch<1.6.0,>=1.5.0 (from l5kit) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch<1.6.0,>=1.5.0 (from l5kit)

My torch versions are the following:

(base) C:\StudioProjects> pip freeze |grep torch
efficientnet-pytorch==0.6.3
facenet-pytorch==1.0.1
torch==1.6.0
torchvision==0.7.0

It seems the torch versions are indeed out of range. Does this mean I have to downgrade torch?

Thanks!

[Bug?] Dataset provides speed by peeking into the future

In agent_sampling.py, the speed is calculated from future frames:
"speed": np.linalg.norm(future_vels_mps[0]),

In my opinion this is a design fault; the speed should be calculated from the last velocity vector in the history.

In its current state it should not be used as a training input.
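
The proposed change would be a one-line swap; in this sketch, history_vels_mps is the hypothetical history counterpart of future_vels_mps, ordered from most recent to oldest:

"speed": np.linalg.norm(history_vels_mps[0]),  # last observed velocity, no peeking ahead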


get_frame_indices issue

In agent_motion_prediction.ipynb, the line below seems to only work with values -51 to -23348 (with indices capped at -23846):

agent_indices = eval_agent_dataset.get_frame_indices(frame_number)

A quick inspection of agent.py shows that self.agents_indices has values with max 1795829, min 33543 and shape (13985,), yet agents_start_index and agents_end_index are 0 and 37 respectively (frames[frame_idx]["agent_index_interval"] = array([0, 38])).
Hence there are no valid indices, and the function below returns an empty array:

    def get_frame_indices(self, frame_idx: int) -> np.ndarray:
        """
        Get indices for the given frame. Here __getitem__ iterate over valid agents indices.
        This means ``__getitem__(0)`` matches the first valid agent in the dataset.
        Args:
            frame_idx (int): index of the scene

        Returns:
            np.ndarray: indices that can be used for indexing with __getitem__
        """
        frames = self.dataset.frames
        assert frame_idx < len(frames), f"frame_idx {frame_idx} is over len {len(frames)}"

        agents_start_index = frames[frame_idx]["agent_index_interval"][0]
        agents_end_index = frames[frame_idx]["agent_index_interval"][1] - 1

        mask_valid_indices = (self.agents_indices >= agents_start_index) * (self.agents_indices <= agents_end_index)
        indices = np.nonzero(mask_valid_indices)[0]
        return indices

I'm unfamiliar with the code, but get_frame_indices in the parent class is different from the one in the AgentDataset class.
