
nwb-schema's Introduction

NWB Schema Format

A format specification schema for the Neurodata Without Borders (NWB) data format.

To get started using NWB, please go to the NWB overview website.

This repo contains the schema files that define the NWB data format.

The NWB schema uses the [NWB specification language](http://schema-language.readthedocs.io/), which defines formal structures for describing the organization of complex data using basic concepts, e.g., Groups, Datasets, Attributes, and Links.

For more information:

  • Learn more about NWB at nwb.org.
  • The PyNWB Python API for the NWB format is available on GitHub.
  • The MatNWB MATLAB API for the NWB format is available on GitHub.

The NWB 1.0 format and API are archived in the [NeurodataWithoutBorders/api-python](https://github.com/NeurodataWithoutBorders/api-python) repository; https://github.com/NeurodataWithoutBorders/api-python/blob/master/nwb/nwb_core.py contains the reference schema for the NWB 1.0 format.

License

“nwb-schema” Copyright (c) 2017-2024, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  3. Neither the name of the University of California, Lawrence Berkeley National Laboratory, U.S. Dept. of Energy nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

You are under no obligation whatsoever to provide any bug fixes, patches, or upgrades to the features, functionality or performance of the source code ("Enhancements") to anyone; however, if you choose to make your Enhancements available either publicly, or directly to Lawrence Berkeley National Laboratory, without imposing a separate written license agreement for such Enhancements, then you hereby grant the following license: a non-exclusive, royalty-free perpetual license to install, use, modify, prepare derivative works, incorporate into other computer software, distribute, and sublicense such enhancements or derivative works thereof, in binary and source code form.

Copyright

“nwb-schema” Copyright (c) 2017-2024, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.

If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Innovation & Partnerships Office at [email protected].

NOTICE. This Software was developed under funding from the U.S. Department of Energy and the U.S. Government consequently retains certain rights. As such, the U.S. Government has been granted for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable, worldwide license in the Software to reproduce, distribute copies to the public, prepare derivative works, and perform publicly and display publicly, and to permit others to do so.

nwb-schema's People

Contributors

ajtritt, bendichter, codycbakerphd, h-mayorquin, iawbrooks, jcfr, jhyearsley, mavaylon1, nicain, oruebel, rly, stephprince, t-b, tjd2002, yarikoptic


nwb-schema's Issues

Change small metadata datasets to attributes where appropriate

Reason: Storing small metadata as datasets (rather than attributes) i) clutters the file hierarchy, making it harder for users to navigate files, ii) makes the metadata appear to be core data, and iii) causes poor performance when extracting metadata from files (reading attributes is more efficient in many cases).

Specific Changes: Change the top-level datasets identifier, nwb_version, session_description, and session_start_time from datasets to attributes. @t-b suggests in #45 that the top-level dataset file_create_date should remain a dataset, as it may need to be updated/extended repeatedly rather than being a static piece of metadata.
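For illustration, a hypothetical before/after sketch in the NWB specification language (using session_description; doc strings abbreviated, not the exact core spec):

    # before: stored as a top-level dataset
    datasets:
    - name: session_description
      dtype: text
      doc: One or two sentences describing the experiment and data in the file.

    # after (proposed): stored as an attribute on the root group
    attributes:
    - name: session_description
      dtype: text
      doc: One or two sentences describing the experiment and data in the file.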

Matrix factorization

Moved prior ticket from pynwb to nwb-schema issue tracker.

A common analysis is the factorization of time series matrices via matrix factorizations such as PCA, NMF, SVD, and CUR. These decompositions typically produce 2-3 matrices: one for the row space, one for the column space, and one for the weights. It is currently unclear where and how to store this kind of analysis.
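One possible shape for such a container, sketched in the NWB specification language (hypothetical type and dataset names; not an accepted proposal):

    groups:
    - neurodata_type_def: MatrixFactorization  # hypothetical type
      doc: Result of a matrix factorization (e.g., PCA, NMF, SVD, CUR) of a time series matrix.
      datasets:
      - name: row_factors
        doc: Row-space factor matrix.
      - name: column_factors
        doc: Column-space factor matrix.
      - name: weights
        doc: Weights (e.g., singular values) of the factorization.
        quantity: '?'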

Format Release Notes: Update auto_gen docs

Update release notes to explicitly list the autogen datasets that have been removed:

NWBFile/epochs.tags
NWBFile/epochs/epoch_X.links
TimeSeries.timestamp_link
TimeSeries.extern_fields
TimeSeries.data_link
TimeSeries.missing_fields
Module.interfaces
TimeSeries/num_samples
ClusterWaveforms/clustering_interface_path
Clustering/cluster_nums
EventDetection/source_electricalseries_path
ImageMaskSeries/masked_imageseries_path
IndexSeries/indexed_timeseries_path
RoiResponseSeries/segmentation_interface_path
ImageSegmentation/image_plane/roi_list
MotionCorrection/image_stack_name/original_path
UnitTimes/unit_list

add help to the following NWBContainer subtypes

When NWBContainer (formerly Interface) became the base type for all types, it brought along the required help attribute.

The following types need this attribute "hardcoded", i.e., an attribute with a fixed value set in the spec (see the sketch after this list).

  • nwb.behavior.yaml: MotionCorrection/CorrectedImageStack
  • nwb.ecephys.yaml: EventDetection
  • nwb.icephys.yaml: IntracellularElectrode
  • nwb.misc.yaml: UnitTimes/SpikeUnit
  • nwb.ogen.yaml: OptogeneticStimulusSite
  • nwb.ophys.yaml: ImageSegmentation/PlaneSegmentation
  • nwb.ophys.yaml: ImageSegmentation/PlaneSegmentation/ROI
  • nwb.ophys.yaml: ImagingPlane
  • nwb.ophys.yaml: ImagingPlane/OpticalChannel
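Hardcoding means giving the attribute a fixed value in the type's spec; a sketch for one of the listed types (the value wording is hypothetical):

    # e.g., in nwb.ophys.yaml under PlaneSegmentation
    attributes:
    - name: help
      dtype: text
      doc: Short description of this type of NWBContainer.
      value: Results of image segmentation of a specific imaging plane  # hypothetical wording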

file_create_date should not be changed to an attribute

I've implemented an API for Igor Pro [1] for reading and writing NWB files. It implements the subset of the NWB specification language that we use.

In the course of implementing this API, the dataset file_create_date was introduced and modification_time was deprecated due to my intervention. See [2] for the changelog entry.

You now seem to partly undo that with the proposed NWB changes.

My original argumentation was:

The definition of modification_time as an attribute is suboptimal:

  • Attributes can only be read/written in full. This means I always have to read the whole array, append to it, and then write it back. It also means I cannot simply append to it, as is possible with datasets.
  • Attributes should be smaller than 64 kB; this can be worked around with some effort [1]. This limit restricts NWB files to fewer than ~3000 modifications, as my ISO 8601 timestamps are 20 characters long.

Therefore I would propose to instead use the dataset "/modification_time". Datasets allow partial I/O and don't have a size restriction. The attribute "modification_time" would only be allowed for backward compatibility.

In addition, there are limitations on how often you can overwrite an attribute [2].

So I would be in favour of keeping file_create_date as a dataset.
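For reference, a sketch (not the verbatim core spec) of an appendable file_create_date; leaving the dimension unconstrained lets writers append a timestamp on each modification:

    datasets:
    - name: file_create_date
      dtype: text  # ISO 8601 timestamps
      dims:
      - num_modifications
      shape:
      - null  # unlimited length, so new entries can be appended
      doc: Creation date, with one entry appended for each subsequent modification.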

Rename and extend Interface

[Hackathon]

  • Interface should be renamed to make its purpose more intuitive.
  • The new class should be made the base type of TimeSeries as well. This would allow current interfaces stored in modules to also be TimeSeries, and would ease adding support for data provenance as a common concept in the format.

Proposed Changes

  • Rename Interface to DataContainer or NWBContainer
  • Change the neurodata_type_inc of TimeSeries to NWBContainer and remove the source attribute from the spec of TimeSeries, as it will be inherited from NWBContainer (see the sketch below)
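A sketch of the second change in the specification language (doc string abbreviated):

    groups:
    - neurodata_type_def: TimeSeries
      neurodata_type_inc: NWBContainer  # proposed new base type
      doc: General purpose time series.
      # no 'source' attribute here; it is inherited from NWBContainer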

Format Docs: update comments and description in TimeSeries

GH-29 deployed a change that added a default_value for TimeSeries.description and TimeSeries.comments. They have been set to default to the following:

  • description: default_value='no description'
  • comments: default_value='no comments'

The changelog will need to be updated to reflect this.
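In the spec language, these defaults look roughly like this (a sketch, not the verbatim core spec):

    attributes:
    - name: description
      dtype: text
      default_value: no description
      doc: Description of this TimeSeries dataset.
    - name: comments
      dtype: text
      default_value: no comments
      doc: Human-readable comments about this TimeSeries dataset.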

Modify dataset spec language to allow compound data types to support table specification

Originally reported by: Andrew Tritt (Bitbucket: ajtritt, GitHub: ajtritt)


Goal: enhance dataset specification language to allow specification of table datasets

Proposed solution: Allow dtype key of the dataset specification language to take the following options:

  • scalars: maintains current functionality
  • list of dicts: Use this to specify column spec
    • each dict takes two keys:
      • column: specifies column name
      • dtype: specifies the data type of the column

HDF5 Storage: In HDF5, such a data type would be represented as a compound data type.
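A hypothetical example of a table spec under this proposal (dataset and column names are illustrative):

    datasets:
    - name: electrodes
      doc: Per-electrode metadata stored as a table.
      dtype:
      - column: id
        dtype: int
      - column: location
        dtype: text
      - column: impedance
        dtype: float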


Update spec of shape for OpticalSeries.field_of_view in nwb.image.yaml

Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)


The current specification of OpticalSeries.field_of_view is:

    - name: field_of_view
      dtype: float32
      doc: Width, height and depth of image, or imaged area (meters).
      quantity: '?'
      dims:
      - width|height
      - width|height|depth
      shape:
      - 2

For the shape to be consistent with the dims, the shape should change to:

    shape:
    - 2
    - 3

schema validation

#6 and @t-b's questions have me thinking.

Is there currently a mechanism for validating one NWB file (or experiment dataset) against an arbitrary schema?

For example...

  • If I have an NWB file that someone else has made w/ a custom extension, can I validate that it conforms to the NWB spec?
  • One could imagine a schema change between 2.b and 2.a where only part of the schema changes, such that some 2.b files are 2.a compliant. For example, perhaps only ElectrodeGroup changes, such that files using ElectrodeGroup built on 2.b are not 2.a compliant but 2.b files that don't use ElectrodeGroup are 2.a compliant.
  • As additional APIs are written (such as MATLAB and Igor Pro), we'll need a way to validate compliance.

Clarify format of `session_start_time` and `file_creation_date`

The NWB spec says:

    doc: 'Time of experiment/session start, UTC. COMMENT: Date + time, Use ISO format (eg, ISO 8601) or a format that is easy to read and unambiguous.'

This definition is too vague for interoperation across different APIs.

I therefore propose to change the wording to: Strict ISO 8601 format including timezone information and T separator, e.g., "2017-08-28T23:24:47+02:00". Using a 64-bit POSIX timestamp has the drawback that the reader needs to account for leap seconds, but would be equally fine for me.

Note: This issue is different from #49, as it does not change the specification but just clarifies it.

Add validtimes() method to most / all data objects

Originally reported by: Loren Frank (Bitbucket: lmfrnk, GitHub: Unknown)


It will be important for a user (and for the query framework) to be able to easily determine when a given data set is valid. In some cases this can be determined from the TimeStamp data in an object field, but in other cases, and in particular for SpikeWaveForm/EventDetection objects, the first and last times of events cannot be used for this purpose. As such, a general method that knows to look in, for example, a locally stored IntervalSeries that is part of each data object would be very helpful.


redundant requirements in ophys.ROI

The following attributes are required to define an ROI:

    pix_mask (Iterable): List of pixels (x,y) that compose the mask.
    pix_mask_weight (Iterable): Weight of each pixel listed in pix_mask.
    img_mask (Iterable): ROI mask, represented in 2D ([y][x]) intensity image.

img_mask is redundant with pix_mask + pix_mask_weight.

It should instead be required to define either img_mask or pix_mask + pix_mask_weight (see the sketch below).
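The spec language has no native either/or constraint, so one minimal sketch (not the final spec) is to make all three datasets optional and state the constraint in the docs:

    datasets:
    - name: img_mask
      doc: ROI mask, represented as a 2D ([y][x]) intensity image. Provide this or pix_mask + pix_mask_weight.
      quantity: '?'
    - name: pix_mask
      doc: List of pixels (x, y) that compose the mask.
      quantity: '?'
    - name: pix_mask_weight
      doc: Weight of each pixel listed in pix_mask.
      quantity: '?'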

Rename Module

[Hackathon] To ease intuition Module should be renamed to more directly identify the purpose of a Module.

Proposed new name

  • ProcessingModule --- i) helps indicate the purpose of the module and ii) preserves the original base naming (i.e., Module)

"manifold" in ImagingPlane

Can someone please clarify what this required attribute is supposed to be?

    manifold (Iterable): Physical position of each pixel. height, weight, x, y, z.

Is this supposed to be a 5 x n_pixels array?

What is the ordering of the pixels?

Add LabMetadata type to /general and define neurodata_types for groups in /general

[Hackathon] To help with issues of multiple inheritance and to ease customization of /general metadata, we should:

  • Add a new group with neurodata_type_def: LabMetadata and quantity: zero_or_many to /general. This would allow labs to add custom metadata by extending LabMetadata rather than having to modify /general directly (see the sketch below).
  • Assign a neurodata_type_def to the various groups in /general (e.g., subject; see, e.g., issue NeurodataWithoutBorders/pynwb#45) and add corresponding container classes in PyNWB to ease management of /general metadata.
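A sketch of the proposed group (keys taken from the proposal; doc wording hypothetical):

    # inside the spec of /general
    groups:
    - neurodata_type_def: LabMetadata
      doc: Place-holder type that labs extend to add custom metadata under /general.
      quantity: zero_or_many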

Remove autogen datasets from the specification

Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)


[Hackathon] At the hackathon it was decided that the datasets created by autogen in the original version should be removed. Users will "raise their hand" in case any specific one of these datasets is needed.

Need

  • Remove datasets/attributes listed below from the schema (if not removed already)
  • Update the release notes for autogen to indicate which specific datasets have been removed
  • Update the PyNWB API to remove the removed datasets/attributes also from the API

List of relevant autogen datasets

  • Redundant storage of the path of links stored in the same group:

    • /indexed_timeseries_path: Path to which the link “indexed_timeseries” points to
    • /segmentation_interface_path: Path to which the link “segmentation_interface” points to
    • /masked_imageseries_path: Path to which the link “masked_imageseries” points to
    • /clustering_interface_path: Path to which the link “clustering_interface” points to
    • /source_electricalseries_path: Path to which the link “source_electricalseries” points to
    • /<image_stack_name>/original_path: Path to which the link “original” points to
    • /epochs/epoch_x/links: Array of strings of the form “A is path
  • List groups/datasets (of a given type or property) in the current group

    • /interfaces: Names of the groups in the module/group
    • /<image_plane>/roi_list: Names of the roi_name groups in this group
    • /unit_list: Names of the unit_N groups
    • /external_fields: List of fields that are stored as external links
    • /data_link: List of TimeSeries that link to a data field
    • /timestamp_link: List of TimeSeries that link to the same timestamp field
  • /missing_fields: List of required/recommended fields that are missing in the time series (i.e., the file is not NWB compliant)
  • /epochs/epoch_x/tags: A sorted list of the different tags used by epochs
  • /num_samples: Number of samples in data (scalar)
  • /cluster_num: Unique values of the dataset num (i.e., cluster indexes used)


Signal decomposition

Port of NeurodataWithoutBorders/pynwb#9 to nwb-schema issue tracker.

A typical analysis is to perform a frequency decomposition (or another form of signal decomposition), e.g., of electrical recordings of the brain. This kind of analysis results in a time series with an additional dimension representing the bands/features into which the signal is decomposed. A possible target for this is FilteredEphys, but, e.g., ElectricalSeries only allows 2D data. One possibility is sketched below.
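A sketch in the specification language of what a decomposition type could look like (hypothetical type name and dimensions; not an accepted proposal):

    groups:
    - neurodata_type_def: DecompositionSeries  # hypothetical type
      neurodata_type_inc: TimeSeries
      doc: Time series resulting from a frequency (or other) decomposition of a signal.
      datasets:
      - name: data
        doc: Decomposed data, with an extra dimension for the bands/features.
        dims:
        - num_times
        - num_channels
        - num_bands
        shape:
        - null
        - null
        - null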

Addition of Interval series to SpikeWaveform and ElectrodeGroup objects

Originally reported by: Loren Frank (Bitbucket: lmfrnk, GitHub: Unknown)


There are a number of cases when a user would need to know exactly when a particular SpikeWaveform or ElectrodeGroup is valid. As an example, a given unit might be clusterable for only part of the time that its ElectrodeGroup is valid, and those times may be distributed over multiple, disjoint periods. Similarly, an ElectrodeGroup might be valid for only part of a defined Epoch, and right now there is no way to indicate that this is the case.

Note that as there might be multiple intervals where one of these objects was valid, we would need an integrated IntervalSeries, not an integrated Epoch object.


FormatDocs: LaTeX Too deeply nested

When including the nested list with the type hierarchy in LaTeX, Sphinx generates a too deeply nested list, resulting in the following error:

! LaTeX Error: Too deeply nested.

See the LaTeX manual or LaTeX Companion for explanation.
Type  H <return>  for immediate help.
 ...                                              
                                                  
l.647 \item
            {}

I have already added "enumitem" to the preamble via the conf.py of the format docs; however, that does not seem to fix the issue. To address the issue for now, I have modified the generate_format_docs.py script to include the type hierarchy only in the HTML version of the docs via:

def render_namespace(...):
    ...
    if type_hierarchy_include:
        if type_hierarchy_include_in_html_only:
            # wrap the include in an '.. only:: html' directive so LaTeX builds skip it
            ns_desc_doc.add_text('.. only:: html %s%s' % (ns_desc_doc.newline, ns_desc_doc.newline))
            ns_desc_doc.add_include(type_hierarchy_include, indent='    ')
            ns_desc_doc.add_text(ns_desc_doc.newline)
    ...

However, this does not really fix the problem but rather avoids it by excluding the problematic parts from the LaTeX docs. Ideally we would find a workaround that allows LaTeX to handle the nested lists properly, or modify render_type_hierarchy(...) to render the hierarchy in a way that works in both HTML and LaTeX.

NWBFile -> NWBSession

Moving here from NeurodataWithoutBorders/pynwb#106 since this is a schema issue.

The refactoring of the NWB infrastructure has brought a much-needed separation of the concerns of the API, schema, and backend. In particular, the schema is now ostensibly agnostic to the specifics of the backend implementation. Practically, the only supported backend is HDF5 via the HDF5IO object in the form.backends.hdf5 module; however, this can now be replaced with, e.g., a database or filestore backend.

Despite this, the schema maintains the now-archaic "File" semantics.

I propose that we rename NWBFile to NWBSession (or NWBExperiment or NWBDataset) and update the descriptions of the various attributes that reference the NWB "file" accordingly.

Docs: Add description of support for compound types

  • Specification Language Docs: Update docs to describe how compound types are specified
  • Specification Language Release Notes: Add note about the addition of compound types
  • Storage docs: Add description on how compound types are mapped to HDF5

make "source" and "help" optional attributes of NWBContainer

Recent changes to the inheritance structure mean that objects (such as NWBFile) which previously did not have "source" and "help" attributes (because they were not particularly relevant) now require them.

see, for example, NeurodataWithoutBorders/pynwb#101

In order to (a) minimize required attributes that may not be relevant for all containers and (b) facilitate the migration of earlier NWB files to the most recent version, I propose that we minimally make "source" and "help" optional attributes of NWBContainer (see the sketch below).
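In the spec language, making an attribute optional is a one-line change (a sketch; doc wording abbreviated):

    attributes:
    - name: source
      dtype: text
      doc: Path to the original source of the data.
      required: false  # proposed: no longer mandatory on every NWBContainer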

Integrate generateFormatDocs with ReadTheDocs

Currently, ReadTheDocs does not automatically generate the sources for the schema docs; we have to run make apidoc and check in the generated sources. This should be automated so that ReadTheDocs generates the sources directly from the YAML specs.

Update Interface classes to use default_name instead of fixed name

[Hackathon] To support processing modules with multiple Interfaces of the same type, all Interfaces should be changed to use a default_name instead of a fixed name (i.e., move the value from the name key to the default_name key), as sketched below. As such, all Interfaces should also have a neurodata_type_def (this should already be the case).

Note: We also need to add a note to the release notes of the format describing this change.
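Sketch of the proposed change for one Interface (the type shown is illustrative):

    # before: fixed name, so a module can hold only one instance
    - neurodata_type_def: BehavioralEvents
      name: BehavioralEvents

    # after (proposed): default name only, so multiple differently named instances are allowed
    - neurodata_type_def: BehavioralEvents
      default_name: BehavioralEvents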

Format Docs: Describe new reorganization of ElectrodeGroup

[Hackathon] The ElectrodeGroup type should be restructured. Once this effort is complete, the release notes of the format need to be updated to describe those changes, i.e., the current description of the changes to ElectrodeGroup needs to be updated accordingly.

modify timestamps specification to allow storing of different time formats

Users would like to store timestamps in both absolute and relative time. To achieve this, the following modification is proposed:

Proposal
Add an additional top-level field called timestamps_offset: the time difference by which timestamps are offset from being relative to the file's global starting time, i.e., session_start_time. This value is 0 by default, i.e., timestamps are directly relative to session_start_time. If this value is not 0, then to obtain timestamps that are directly relative to session_start_time, one must subtract this offset.

Advantages

  • This allows users to store timestamps in absolute POSIX time without having to set session_start_time to the Unix epoch (00:00:00, 1 January 1970). Saving timestamps as absolute POSIX timestamps can be done by setting timestamps_offset to the POSIX-timestamp equivalent of session_start_time. Saving timestamps relative to session_start_time (the current specification) can be achieved by setting timestamps_offset to 0.

  • This also makes the specification easier to develop against, since this will explicitly specify the offset to obtain relative timestamps, eliminating the need for APIs to guess based on range.

Disadvantages

  • Users and APIs will need to be aware of the additional field.

Modifications

  • An alternative would be to allow each timestamps dataset to have its own timestamps_offset field (sketched below).
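A sketch of that per-dataset variant (placement per the proposal; the dtype and default are assumptions):

    datasets:
    - name: timestamps
      dtype: float64
      doc: Sample timestamps, in seconds, relative to session_start_time plus timestamps_offset.
      attributes:
      - name: timestamps_offset
        dtype: float64  # assumed dtype, in seconds
        default_value: 0.0  # 0 means timestamps are directly relative to session_start_time
        doc: Offset to subtract from timestamps to make them relative to session_start_time.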

Documentation of storage of links

Originally reported by: Oliver Ruebel (Bitbucket: oruebel, GitHub: oruebel)


[Hackathon] Check/clarify in the documentation that only SoftLinks and ExternalLinks are allowed in NWB-N, and clarify why HardLinks are not allowed in the format (i.e., they cannot be distinguished from regular datasets/groups; hence the primary location of the data is unclear and the target of links cannot be determined).


Restructuring of electrode group

Originally reported by: Loren Frank (Bitbucket: lmfrnk, GitHub: Unknown)


[Hackathon]
Need: The ability to reference arbitrary subsets of electrodes, to allow processing of subsets of electrodes while being able to describe and access the data of the electrodes that were used.
Path: Andrew/Oliver will create a proposal for restructuring ElectrodeGroup to address this issue and send it out to decide on a specific solution.
Goal: This item should be completed before SfN.

[Use case example: Loren Frank]
As it stands now, the electrode group does not have a list of individual electrode objects that make it up. It would be much easier to work with if one created an Electrode object for each electrode, grouped those together in an ElectrodeGroup where appropriate, and then saved a link / index / identifier for either the electrode or the electrodegroup in associated LFP or spikewaveform series.

Here's an example use case from our previously collected data: A four-wire tetrode consists of four electrodes that are part of one ElectrodeGroup. From this tetrode, one electrode is selected to record LFP data at a sampling rate of 1500 Hz, while spike snippets are saved at 30 kHz from all four channels. It would be helpful to have a single electrode object that could be associated with the LFP data, and the same electrode object (in association with the other three electrodes of the tetrode) as part of an ElectrodeGroup associated with the spike waveforms.

As for the association, ideally things would be stored so the electrode / electrodegroup could be accessed directly from the data.


Incorrect github path link in README

In the README file, the first line says that PyNWB has moved to GitHub:

[Screenshot of the first line of the README]

This links back to the nwb-schema page (the same page the link is on). I assume this should link to the PyNWB GitHub page.

I will fix this and submit a pull request.

Format docs: Sorting of types to sections broken in doc generation

Changes to the schema (specifically the renaming of Interface and Module, and the fact that all types now inherit from NWBContainer) have broken the sorting of core format types into sections. The function that needs to be fixed is sort_type_hierarchy_to_sections in docs/utils/generate_format_docs.py.

Format addition request

Hi everyone,

I would like to make quite a large number of suggested additions to the nwb data model.

Short story: we have developed a relational database system for keeping track of data in our lab. We have been using it for several months now and are happy with it. We based the schema on NWB 1.0 but there were many additions needed in order to capture all the data we need to store. While doing this we also had in mind the forthcoming IBL project, and I believe that these additions to the data model will cover most of what is needed there also.

The full data model is at:

http://alyx.readthedocs.io/en/latest/models.html

It's formatted as a Django specification.

All the best,
Kenneth.

Format Docs: Update name of Interface

Interface has been renamed to NWBContainer. Need to update the docs to reflect this change.

NOTES from #29

This change addresses the following issues:

  • Remove autogen fields: #14
  • Add default_name to Interface #16
  • Rename Module to ProcessingModule #17
  • Rename Interface to NWBContainer, and make it base class for all group data types #18

Commit Summary

  • update reformat script to omit autogen fields
  • removed autogen fields from core specs
  • updated reformat script to make NWBContainer the base class for all base types
  • changed Module include NWBContainer, not Interface
  • updated specs
  • renamed Module to ProcessingModule
  • updated specs to rename Module to ProcessingModule
  • removed ancestry from TimeSeries
  • added use of OrderedDict to get better diffs
  • added explicit define of Device spec
  • added help to ProcessingModule

File Changes

M bin/reformat_spec.py (128)
M core/nwb.base.yaml (177)
M core/nwb.behavior.yaml (36)
M core/nwb.ecephys.yaml (88)
M core/nwb.epoch.yaml (33)
M core/nwb.file.yaml (201)
M core/nwb.icephys.yaml (70)
M core/nwb.image.yaml (87)
M core/nwb.misc.yaml (50)
M core/nwb.ogen.yaml (8)
M core/nwb.ophys.yaml (106)
M core/nwb.retinotopy.yaml (136)

Patch Links:

https://github.com/NeurodataWithoutBorders/nwb-schema/pull/29.patch
https://github.com/NeurodataWithoutBorders/nwb-schema/pull/29.diff
