neurodatawithoutborders / pynwb
A Python API for working with Neurodata stored in the NWB Format
Home Page: https://pynwb.readthedocs.io
License: Other
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
While working on the iterative write for the BouchardLab data I came across a couple of gotchas. This ticket is primarily to track things to look out for and to spawn a discussion on how we should deal with the problem of iterative/streaming write:
Which datasets should support iterative write? Datasets that we want to allow to be written iteratively must accept Iterable type input. It is likely that many classes currently only specify, e.g., ndarray and list as types for objects that one might want to write iteratively.
Iterators and dimension checking Strict checking of dimensions may not be possible in __init__ when we have an Iterable (rather than an array).
Avoid loading all data from iterators We should not call list(object) on an Iterable in the constructor, as that leads to loading all of the data. E.g., ephys.FeatureExtraction.__init__ called self.fields['features'] = list(features)
Iterators and read API Once we are dealing with read APIs, allowing Iterables for datasets will likely get even more tricky. E.g., implementing array slicing when we are given an Iterable will be non-trivial (and doing so in every NWB API class will be problematic). Not an issue, because read occurs from NWB-HDF5 files only.
Reusability It would be nice to implement as much of the iterative and streaming write behavior as possible in an HDF5-agnostic fashion so that the mechanisms become reusable.
Configurability The iterative write is not very configurable right now. E.g., one may want a safer write option that flushes the data regularly, to ensure we have a valid file even if a data stream breaks for some reason.
Simultaneous Streaming Write The current iterative write addresses the need where we don't want to load all data into memory in order to ingest data into a file (and in part the case where we only have a single data stream). However, it does not address the use-case of streaming data from an instrument. One central challenge seems to be that multiple dataset streams are not supported, i.e., the iterative write is currently limited to one-dataset-at-a-time. Data streaming, however, will require simultaneous, iterative update of multiple datasets; e.g., even in a single TimeSeries one might need to update the "data" and "timestamps" as data arrives. This means that, rather than iterating over one dataset until it is finished and then writing the next dataset, one will need to iterate over a collection of streams and update data as it arrives. NOTE: The datasets may also not all be part of the same group; e.g., when recording from multiple devices simultaneously one will want to create multiple timeseries in a streaming fashion.
re 5) Rather than writing datasets in an iterative fashion directly when a user asks to write a file, one could initialize all datasets that need to be written iteratively and then, after all static data is written and all datasets have been initialized, do the iteration over the datasets simultaneously. However, this behavior should be configurable; e.g., when streaming data from an instrument we have to do simultaneous streams, but when we stream data from files (where the main purpose is to conserve memory) the current behavior of one-dataset-at-a-time is preferable.
re 1,3) This is something we should pay attention to now in order to avoid having to do repeated updates.
re 1-4) Maybe we should restrict the use of data iteration to allow only DataChunkIterators as iterable inputs. This would remove the ambiguity of having to make complex decisions based on type and allow us to deal with these issues in a more structured fashion. The overhead for users to wrap an iterator in a DataChunkIterator is fairly low, so it seems OK to make it a requirement. Also, this enforces that users make a distinct decision when they want to write data iteratively.
re 4) Maybe we need a mechanism that ensures that we allow Iterable objects only for write purposes. What would be the right mechanism for this? Or maybe, if we only allow DataChunkIterator, then we could possibly add mechanisms to load data into memory as needed?
re 6) One solution might be to put as much functionality as possible into DataChunkIterator. To do the iteration over multiple iterators for a true simultaneous streaming write, we could maybe add a class for managing DataChunkIterators that would deal with coordinating the write of the multiple streams, e.g., an IterativeWriteManager. In this way different backends could implement their own IterativeWriteManagers that reuse the functionality that is already there and just add their own custom write routines.
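The multi-stream coordination sketched above could look like a manager that round-robins over a set of chunk iterators and hands each chunk to a backend-supplied append callback. Everything below (the `IterativeWriteManager` name, the `add_stream`/`append` interface) is a hypothetical illustration, not existing pynwb API:

```python
class IterativeWriteManager:
    """Hypothetical sketch: round-robin over several chunk iterators,
    appending each chunk to its target via a backend-supplied callback."""

    def __init__(self):
        self._streams = []  # list of (iterator, append_callback) pairs

    def add_stream(self, chunk_iter, append):
        self._streams.append((iter(chunk_iter), append))

    def run(self):
        # Iterate over all streams simultaneously, dropping each one
        # as it is exhausted, instead of draining them one at a time.
        pending = list(self._streams)
        while pending:
            still_open = []
            for chunk_iter, append in pending:
                try:
                    append(next(chunk_iter))
                    still_open.append((chunk_iter, append))
                except StopIteration:
                    pass  # stream finished; stop polling it
            pending = still_open

# toy usage: interleave writes of "data" and "timestamps" of one TimeSeries
data_out, ts_out = [], []
mgr = IterativeWriteManager()
mgr.add_stream([10, 20, 30], data_out.append)
mgr.add_stream([0.0, 0.1, 0.2], ts_out.append)
mgr.run()
```

A real backend would replace the list `append` callbacks with dataset-resize-and-write routines, but the scheduling logic would stay the same.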
Originally reported by: Andrew Tritt (Bitbucket: ajtritt, GitHub: ajtritt)
As a step in the direction of allowing individuals to stream into NWB files, provide some functionality to create an NWB file with empty datasets, which can then be written to by other APIs used in acquisition systems.
Also, provide ways of moving existing HDF5 datasets into correct locations to comply with NWB format specification.
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
When using 3to2 to convert pynwb from Python 3 to Python 2, we see the following issues in the unit tests:
This test seems to fail for several reasons: 1) 3to2 somehow removes the "@self.type_map.neurodata_type('MyNWBContainer')" decorator from the local class MyNWBContainerMap, and 2) something seems to fail with the registration itself.
Traceback (most recent call last):
File "/Users/oruebel/Devel/nwb/test_pynwb_py3to2/tests/unit/test_io.py", line 79, in test_register
self.assertIsInstance(cls, MyNWBContainerMap)
AssertionError: <pynwb.io.build.map.ObjectMapper object at 0x10a5e7cd0> is not an instance of <class 'test_io.MyNWBContainerMap'>
The following tests appear to fail because validation of str vs unicode does not happen properly after applying 3to2:
Traceback (most recent call last):
File "/Users/oruebel/Devel/nwb/test_pynwb_py3to2/tests/unit/test_io.py", line 85, in test_override
builder = self.type_map.build(container_inst, self.build_manager)
File "/Users/oruebel/Devel/nwb/test_pynwb_py3to2/pynwb/io/build/map.py", line 76, in build
attr_map = self.get_map(container)
File "/Users/oruebel/Devel/nwb/test_pynwb_py3to2/pynwb/io/build/map.py", line 60, in get_map
spec = self.__catalog.get_spec(ndt)
File "/Users/oruebel/Devel/nwb/test_pynwb_py3to2/pynwb/core.py", line 214, in func_call
raise TypeError(u', '.join(parse_err))
TypeError: incorrect type for 'obj_type' (got 'str', expected 'unicode or type')
Traceback (most recent call last):
File "/Users/oruebel/Devel/nwb/test_pynwb_py3to2/tests/unit/test_io.py", line 78, in test_register
cls = self.type_map.get_map(container_inst)
File "/Users/oruebel/Devel/nwb/test_pynwb_py3to2/pynwb/io/build/map.py", line 60, in get_map
spec = self.__catalog.get_spec(ndt)
File "/Users/oruebel/Devel/nwb/test_pynwb_py3to2/pynwb/core.py", line 214, in func_call
raise TypeError(u', '.join(parse_err))
TypeError: incorrect type for 'obj_type' (got 'str', expected 'unicode or type')
Traceback (most recent call last):
File "/Users/oruebel/Devel/nwb/test_pynwb_py3to2/tests/unit/test_io.py", line 48, in test_default_mapping
builder = self.type_map.build(container_inst, self.build_manager)
File "/Users/oruebel/Devel/nwb/test_pynwb_py3to2/pynwb/io/build/map.py", line 76, in build
attr_map = self.get_map(container)
File "/Users/oruebel/Devel/nwb/test_pynwb_py3to2/pynwb/io/build/map.py", line 60, in get_map
spec = self.__catalog.get_spec(ndt)
File "/Users/oruebel/Devel/nwb/test_pynwb_py3to2/pynwb/core.py", line 214, in func_call
raise TypeError(u', '.join(parse_err))
TypeError: incorrect type for 'obj_type' (got 'str', expected 'unicode or type')
Originally reported by: Andrew Tritt (Bitbucket: ajtritt, GitHub: ajtritt)
import pynwb
print(pynwb)
import pkg_resources
print(pkg_resources.get_distribution('pynwb').version)
produces the following:
/local1/anaconda2/envs/pynwb/bin/python /data/mat/nicholasc/pynwb/nicholasc/sandbox.py
<module 'pynwb' from '/data/mat/nicholasc/pynwb/src/pynwb/__init__.py'>
Traceback (most recent call last):
File "/data/mat/nicholasc/pynwb/nicholasc/sandbox.py", line 7, in <module>
print(pkg_resources.get_distribution('pynwb').version)
File "/local1/anaconda2/envs/pynwb/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 557, in get_distribution
File "/local1/anaconda2/envs/pynwb/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 431, in get_provider
File "/local1/anaconda2/envs/pynwb/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 968, in require
File "/local1/anaconda2/envs/pynwb/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 854, in resolve
pkg_resources.DistributionNotFound: The 'pynwb' distribution was not found and is required by the application
Process finished with exit code 1
from Nicholas Cain:
"The pynwb module is definitely in the python path, but pkg_resources can't find it
setuptools v27.2.0, python 3.6.1 (anaconda)"
Originally reported by: Andrew Tritt (Bitbucket: ajtritt, GitHub: ajtritt)
When NWBContainer-derived classes have the same name, the ObjectMapper ambiguously chooses a type, since it simply uses a dict mapping class name to class.
We will need to determine how to handle this. Do we allow it? If so, we need to develop a mechanism to handle this ambiguity.
Originally reported by: [email protected] (Bitbucket: nicholashcain, GitHub: Unknown)
MRE at gist:
https://gist.github.com/nicain/65dd4758590641af1f612827a4b467a2
Originally reported by: [email protected] (Bitbucket: nicholashcain, GitHub: Unknown)
Gist to reproduce the issue, and test against:
https://gist.github.com/nicain/be598d5b7be5f2ffe1fd7e8d27c87ec3
Originally reported by: maxdougherty (Bitbucket: maxdougherty, GitHub: maxdougherty)
Currently a TimeSeries requires the timestamps parameter to be specified, even if a start_time and rate have been specified. This contradicts the example shown in the pyNWB docs.
-#47
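The constraint the error message describes seems to be "timestamps, or starting_time together with rate"; a minimal sketch of that check (the `check_timing` name is hypothetical, not the actual pynwb validation code):

```python
def check_timing(timestamps=None, starting_time=None, rate=None):
    """Return True if the timing arguments form a valid combination:
    either explicit timestamps, or a starting_time plus a sampling rate."""
    has_timestamps = timestamps is not None
    has_regular = starting_time is not None and rate is not None
    if not (has_timestamps or has_regular):
        raise TypeError(
            "either 'timestamps' or 'starting_time' and 'rate' must be specified.")
    return True

# regularly sampled data should not need explicit timestamps
check_timing(starting_time=0.0, rate=30.0)
```

Under this reading, requiring the timestamps parameter even when start_time and rate are given is a bug in the constructor's argument validation.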
Originally reported by: Loren Frank (Bitbucket: lmfrnk, GitHub: Unknown)
The call to create an interval series now requires a data element, but the documentation only specifies a name and a source.
In addition, with a data element specified, I now get the following error:
TypeError: either 'timestamps' or 'starting_time' and 'rate' must be specified.
I'm also getting similar errors when I execute
nwbf.create_electrode_group(name, coord, desc, recording_device, area, 1.0):
File "/Users/loren/Src/NWB/franklab/convert_nspike.py", line 342, in
electrode_group[day][tet_ind+1] = nwbf.create_electrode_group(name, coord, desc, recording_device, area, 1.0)
File "/Users/loren/anaconda/lib/python3.5/site-packages/pynwb-0.0.1-py3.5.egg/form/utils.py", line 215, in func_call
File "/Users/loren/anaconda/lib/python3.5/site-packages/pynwb-0.0.1-py3.5.egg/pynwb/file.py", line 260, in create_electrode_group
File "/Users/loren/anaconda/lib/python3.5/site-packages/pynwb-0.0.1-py3.5.egg/form/utils.py", line 214, in func_call
TypeError: missing argument 'channel_impedance', missing argument 'description', missing argument 'location', missing argument 'device'
Originally reported by: Loren Frank (Bitbucket: lmfrnk, GitHub: Unknown)
Traceback (most recent call last):
File "/Users/loren/Src/NWB/pynwb/src/form/backends/hdf5/h5tools.py", line 280, in scalar_fill
dset = parent.require_dataset(name, data=data, shape=None, dtype=dtype)
File "/Users/loren/anaconda3/lib/python3.6/site-packages/h5py/_hl/group.py", line 136, in require_dataset
raise TypeError("Shapes do not match (existing %s vs new %s)" % (dset.shape, shape))
TypeError: Shapes do not match (existing () vs new None)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/loren/Src/NWB/franklabnwb/convert_nspike.py", line 486, in
io.write(nwbf)
File "/Users/loren/Src/NWB/pynwb/src/form/utils.py", line 231, in func_call
return func(self, **parsed['args'])
File "/Users/loren/Src/NWB/pynwb/src/form/backends/io.py", line 25, in write
self.write_builder(f_builder)
File "/Users/loren/Src/NWB/pynwb/src/form/utils.py", line 231, in func_call
return func(self, **parsed['args'])
File "/Users/loren/Src/NWB/pynwb/src/form/backends/hdf5/h5tools.py", line 116, in write_builder
write_group(self.__file, name, gbldr.groups, gbldr.datasets, gbldr.attributes, gbldr.links)
File "/Users/loren/Src/NWB/pynwb/src/form/utils.py", line 238, in func_call
return func(**parsed['args'])
File "/Users/loren/Src/NWB/pynwb/src/form/backends/hdf5/h5tools.py", line 157, in write_group
builder.links)
File "/Users/loren/Src/NWB/pynwb/src/form/utils.py", line 238, in func_call
return func(**parsed['args'])
File "/Users/loren/Src/NWB/pynwb/src/form/backends/hdf5/h5tools.py", line 157, in write_group
builder.links)
File "/Users/loren/Src/NWB/pynwb/src/form/utils.py", line 238, in func_call
return func(**parsed['args'])
File "/Users/loren/Src/NWB/pynwb/src/form/backends/hdf5/h5tools.py", line 157, in write_group
builder.links)
File "/Users/loren/Src/NWB/pynwb/src/form/utils.py", line 238, in func_call
return func(**parsed['args'])
File "/Users/loren/Src/NWB/pynwb/src/form/backends/hdf5/h5tools.py", line 164, in write_group
builder.get('attributes'))
File "/Users/loren/Src/NWB/pynwb/src/form/utils.py", line 238, in func_call
return func(**parsed['args'])
File "/Users/loren/Src/NWB/pynwb/src/form/backends/hdf5/h5tools.py", line 251, in write_dataset
dset = scalar_fill(parent, name, data)
File "/Users/loren/Src/NWB/pynwb/src/form/backends/hdf5/h5tools.py", line 282, in scalar_fill
raise Exception("Could not create scalar dataset %s in %s" % (name, parent.name)) from exc
Exception: Could not create scalar dataset reference_frame in /processing/Behavior Module/Position/Position Data
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
A common analysis is the factorization of timeseries matrices via PCA, NMF, SVD, CUR, and other matrix factorizations. These decompositions often result in the creation of typically 2-3 matrices: one for the row-space, one for the column-space, and one for the weights. It is unclear where and how to store this kind of analysis right now.
Originally reported by: dcamp_lbl (Bitbucket: dcamp_lbl, GitHub: Unknown)
Create the base OpticalSeries class
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
Create data conversion scripts (and/or a notebook) to convert Kris's data, which we used in the BRAINformat paper, to NWB.
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
pynwb.io.h5tools.__get_type specifies h5py.special_dtype(vlen=bytes) as the datatype for string objects. It looks like this implies: 1) one cannot store non-ASCII characters, and 2) when loading a string dataset we get back a bytestring, so one has to call decode('utf-8') first to get a unicode string, and a user needs to know the encoding. In Python 2, h5py used different datatypes for variable-length ASCII and unicode strings. Is this no longer supported?
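Point (2) is easy to reproduce without HDF5 at all: a vlen=bytes dataset hands the reader raw UTF-8 bytes, which only round-trip to text if the reader knows the encoding (plain-Python illustration, not pynwb code):

```python
# Storing strings as vlen=bytes means the writer must encode, and the
# reader gets bytes back and must explicitly decode with the right codec.
original = "Rübel"                   # non-ASCII unicode string
stored = original.encode("utf-8")    # what would land in the file
assert isinstance(stored, bytes)     # reading returns a bytestring...
roundtrip = stored.decode("utf-8")   # ...so the reader must decode('utf-8')
```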
Originally reported by: dcamp_lbl (Bitbucket: dcamp_lbl, GitHub: Unknown)
Create the base ImageMaskSeries class
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
pynwb.io.base.py contains the imports:
from .tools.map import H5Builder
from . import type_map
I believe the import for type_map should probably be something like:
from pynwb.io.build.map import TypeMap as type_map
I'm not sure where H5Builder has moved to or what its new name is.
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
NWB files contain a currently freeform area in /general/specifications for storing specifications. To ease use of the specification data it would be useful to:
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
It looks like NWBGroupSpec.neurodata_type_inc returns the value of neurodata_type in cases where a Group is a base type that does not inherit from anything. E.g., when I do the following:
print(rt_spec.neurodata_type_def, rt_spec.neurodata_type_inc, rt_spec.neurodata_type)
I get, e.g., for ROI, SpikeUnit, NWBFile, etc. the same value for all three attributes. I would expect neurodata_type_inc to be None in cases where a spec does not inherit from any type, rather than being equal to the type.
Also, for types that are deeper in the hierarchy, neurodata_type_inc appears to return the first ancestor; e.g., for CurrentClampSeries, neurodata_type_inc returns TimeSeries instead of PatchClampSeries.
The YAML for the above seemed to be fine, but the code seems to return the wrong value. However, e.g., for EventDetection the neurodata_type_inc should be Interface but seems to be missing from the YAML.
Originally reported by: jeffteeters (Bitbucket: jeffteeters, GitHub: jeffteeters)
I tried implementing the example code on this page:
http://pynwb.readthedocs.io/en/latest/example.html#creating-and-writing-nwb-files
(under section “Write an NWBFile”) to create a TimeSeries.
This generates a TimeSeries that does not include “num_samples”, which is a required dataset. The sample code is below. The example would not run as written, so perhaps the code in the documentation needs to be updated. To allow it to run, I had to add several import statements (marked with the comment "ADDED"). I also tried adding a specification of num_samples in the call that creates the TimeSeries. Both with and without the num_samples argument, the API failed to create the num_samples dataset, and no error or warning was displayed that the required dataset “num_samples” was missing.
#!python
from pynwb import NWBFile, get_build_manager
from form.backends.hdf5 import HDF5IO
from datetime import datetime # ADDED
from pynwb import base # ADDED
# make an NWBFile
start_time = datetime(1970, 1, 1, 12, 0, 0)
create_date = datetime(2017, 4, 15, 12, 0, 0)
nwbfile = NWBFile('test.nwb', 'a test NWB File', 'TEST123', start_time, file_create_date=create_date)
ts = base.TimeSeries('test_timeseries', 'example_source', list(range(100,200,10)), 'SIunit', timestamps=list(range(10)), resolution=0.1, num_samples=3) # ADDED num_samples and base. prefix
nwbfile.add_raw_timeseries(ts)
manager = get_build_manager()
path = "test_pynwb_io_hdf5.h5"
io = HDF5IO(path, manager)
io.write(nwbfile)
io.close()
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
A typical analysis is to perform a frequency decomposition (or other form of signal decomposition), e.g., of electrical recordings of the brain. This kind of analysis results in a timeseries with an additional dimension to represent the bands/features the signal is decomposed into. A possible target for this is FilteredEphys, but, e.g., the ElectricalSeries only allows for 2D data.
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
To ease use/management of subject metadata in /general/subject we should (similar to the other groups in /general):
This would also help keep the NWBFile interface simpler.
Originally reported by: Andrew Tritt (Bitbucket: ajtritt, GitHub: ajtritt)
Within the specification, there are fields that can be determined from other datasets that might be written from a DataChunkIterator. For example, the field .num_samples can be determined from .data. If data is passed in as a DataChunkIterator, num_samples can only be determined when the DataChunkIterator is exhausted.
We need a class that can analyze a DataChunkIterator while it is being read from. Once the DataChunkIterator is closed, this class should be able to call back to some other object for further processing (e.g. setting a variable), or provide a way for its result to be read.
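One possible shape for that class is a thin wrapper that counts elements as they pass through the iterator and fires a callback once the source is exhausted (the `ExhaustionObserver` name and the callback signature are hypothetical, not existing API):

```python
class ExhaustionObserver:
    """Hypothetical sketch: wrap an iterator, count the elements that
    pass through, and report the final count via a callback on exhaustion."""

    def __init__(self, source, on_done):
        self._source = iter(source)
        self._on_done = on_done
        self.num_samples = 0

    def __iter__(self):
        return self

    def __next__(self):
        try:
            item = next(self._source)
        except StopIteration:
            # Source exhausted: num_samples is now final, notify the consumer
            # (e.g. to fill in the .num_samples field of the TimeSeries).
            self._on_done(self.num_samples)
            raise
        self.num_samples += 1
        return item

# usage: num_samples becomes known only once the stream is fully consumed
result = {}
obs = ExhaustionObserver(range(5), lambda n: result.setdefault("num_samples", n))
consumed = list(obs)
```

The write path would iterate over the wrapper instead of the raw DataChunkIterator and write the derived field from the callback after the dataset is complete.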
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
The problem this ticket describes is the following. In several cases, NWBContainers are closely tied to their parent container, e.g., SpikeUnit to UnitTimes, Device to NWBFile, etc. In these cases, the API often provides dedicated methods on the main type to create these contained types. However, if the user creates the contained objects independently using the corresponding class, then we need to make sure that the parent gets set correctly to ensure the objects are written correctly. For example:
#!python
spike_unit_objs = [SpikeUnit(....) for s in spike_unit_data]
unit_times = UnitTimes(source="Data as reported in the original crcns file",
                       spike_units=spike_unit_objs)
for i in spike_unit_objs:  # We now need to set the parent for each spike_unit_obj
    i.parent = unit_times
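One way to remove the manual loop would be for the parent container to claim ownership of children handed to its constructor. A minimal sketch, where the classes are stand-ins for the real pynwb types, not their actual implementations:

```python
class Container:
    """Stand-in for NWBContainer: just holds a parent pointer."""
    def __init__(self):
        self.parent = None

class SpikeUnit(Container):
    pass

class UnitTimes(Container):
    def __init__(self, source, spike_units):
        super().__init__()
        self.source = source
        self.spike_units = list(spike_units)
        # Claim ownership of any child whose parent is still unset,
        # so callers no longer need the explicit assignment loop.
        for unit in self.spike_units:
            if unit.parent is None:
                unit.parent = self

units = [SpikeUnit() for _ in range(3)]
ut = UnitTimes(source="crcns file", spike_units=units)
```

Checking `parent is None` first keeps the behavior safe when a child was already attached elsewhere; whether re-parenting should instead raise an error is a design decision for the real API.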
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
Need to complete the implementation of the Epoch.links function in pynwb.epoch.py. The function should return a list of strings of the form "'<TimeSeries_X> is path_to_TimeSeries'" (see http://nwb-schema.readthedocs.io/en/latest/format.html#epoch)
Originally reported by: Loren Frank (Bitbucket: lmfrnk, GitHub: Unknown)
#!python
from pynwb.spec import NWBGroupSpec, NWBNamespaceBuilder
# create a builder for the namespace
ns_builder = NWBNamespaceBuilder("Frank Laboratory NWB Extensions", "franklab", version='0.1')
fails with the following error:
Users/loren/anaconda3/bin/python /Users/loren/Src/NWB/franklabnwb/fl_extensions.py
Traceback (most recent call last):
File "/Users/loren/Src/NWB/franklabnwb/fl_extensions.py", line 4, in
ns_builder = NWBNamespaceBuilder("Frank Laboratory NWB Extensions", "franklab", version='0.1')
File "/Users/loren/Src/NWB/pynwb/src/form/utils.py", line 231, in func_call
return func(self, **parsed['args'])
File "/Users/loren/Src/NWB/pynwb/src/pynwb/spec.py", line 173, in __init__
args.insert(NWBNamespace)
TypeError: insert() takes exactly 2 arguments (1 given)
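The traceback points at `args.insert(NWBNamespace)`: `list.insert` takes an index and an object, so the call is missing its position argument. A minimal reproduction, with the likely intended fix shown as an assumption about the code's intent, not a verified patch:

```python
args = ["existing"]

# Reproduce the bug: list.insert requires (index, object).
try:
    args.insert("NWBNamespace")  # mirrors the failing args.insert(NWBNamespace)
except TypeError as exc:
    error_message = str(exc)     # "insert() takes exactly 2 arguments (1 given)"

# Presumably the intent was to prepend the class to the argument list:
args.insert(0, "NWBNamespace")
```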
Originally reported by: dcamp_lbl (Bitbucket: dcamp_lbl, GitHub: Unknown)
Create the base RoiResponseSeries class
Originally reported by: dcamp_lbl (Bitbucket: dcamp_lbl, GitHub: Unknown)
Create the base ImageSeries class
Originally reported by: Andrew Tritt (Bitbucket: ajtritt, GitHub: ajtritt)
Create the base ImageSeries class
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
A user should be able to specify additional datasets (pynwb objects), and possibly groups and attributes, that should be stored in addition to the objects defined in the API and spec.
A user should then also have standard means to access objects that were added (i.e., that are not in the spec) via the API.
Originally reported by: Andrew Tritt (Bitbucket: ajtritt, GitHub: ajtritt)
So far, this issue exists:
https://gist.github.com/nicain/8965460ebcab2075fb478839843cad9f
Originally reported by: Andrew Tritt (Bitbucket: ajtritt, GitHub: ajtritt)
Originally reported by: jeffteeters (Bitbucket: jeffteeters, GitHub: jeffteeters)
The change history on:
http://nwb-schema.readthedocs.io/en/latest/format.html#a-may-2017
does not include the change made to the TimeSeries "ancestry" and "neurodata_type" attributes. In the previous version, all TimeSeries types had an "ancestry" attribute whose value was an array giving the class ancestry hierarchy. The description in the previous version's documentation is:
"The class-hierarchy of this TimeSeries, with one entry in the array for each ancestor. An alternative and equivalent description is that this TimeSeries object contains the datasets defined for all of the TimeSeries classes listed. The class hierarchy is described more fully below. For example: [0]=TimeSeries, [1]=ElectricalSeries [2]=PatchClampSeries. The hierarchical order should be preserved in the array -- i.e., the parent object of subclassed element N in the array should be element N-1"
The purpose of this was so that applications written to use a particular TimeSeries type could still function on a subclass of that type. (The application could check the list to identify whether it contains a type the application is set up to use.) Also, this attribute is how the different TimeSeries types were identified in the previous version.
In the previous version, all TimeSeries had the same value for "neurodata_type", namely "TimeSeries". This is no longer the case in pynwb, but this change is also not described in the change history.
Originally reported by: Andrew Tritt (Bitbucket: ajtritt, GitHub: ajtritt)
In form.backends.hdf5.h5tools, the data type is automatically determined by __get_type, which raises an exception when a dataset is empty.
We need to determine how to handle the case when an empty dataset needs to be created.
This is the cause of Issue #34.
Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)
Currently h5tools.__list_fill__ fails on empty lists. This bug was triggered because the default value for file_create_date in NWBFile is an empty list. The reason the code fails is that __get_type(...) cannot determine a data type for the empty list and then raises an exception. The default behavior of h5py in this case seems to be essentially require_dataset(name, shape=(0,), dtype=float). For now, I changed the behavior of NWBFile in the filecreateupdate branch to avoid hitting this problem, but I'm not sure what the right behavior should be for empty data, in part because for empty data we should actually look up the correct data_type from the spec (or DatasetBuilder) rather than from the data array itself.
#!python
def __list_fill__(parent, name, data):
    data_shape = __get_shape(data)
    try:
        data_dtype = __get_type(data)
    except Exception as exc:
        raise Exception('cannot add %s to %s - could not determine type' % (name, parent.name)) from exc
    try:
        dset = parent.require_dataset(name, shape=data_shape, dtype=data_dtype)
    except Exception as exc:
        raise Exception("Could not create scalar dataset %s in %s" % (name, parent.name)) from exc
    if len(data) > dset.shape[0]:
        new_shape = list(dset.shape)
        new_shape[0] = len(data)
        dset.resize(new_shape)
    dset[:] = data
    return dset
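One possible direction, as suggested above, is to fall back to a dtype from the spec (or DatasetBuilder) when inference from the data fails. A plain-Python sketch of that decision, independent of h5py; the `infer_dtype` name and `spec_dtype` parameter are hypothetical:

```python
def infer_dtype(data, spec_dtype=None, default=float):
    """Infer an element type from data; for empty data, fall back to the
    dtype recorded in the spec/DatasetBuilder, then to a default."""
    for item in data:
        # Non-empty: use the type of the first element, as __get_type does.
        return type(item)
    # Empty data: there is nothing to inspect, so use the fallback chain
    # (spec-provided dtype first, then the h5py-like float default).
    if spec_dtype is not None:
        return spec_dtype
    return default
```

This keeps __get_type's behavior for non-empty data while making the empty-list case explicit instead of an exception.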