Glossary of Core NWB Tools

The glossary shown here provides a quick overview of the key software packages in the core NWB software stack. For a more general discussion of how these packages fit together, see the NWB Software Ecosystem page on the main NWB website.

Read/Write NWB File APIs

The NWB reference APIs provide full support for reading and writing all components of the NWB standard, including support for extensions. The APIs are interoperable, i.e., files created with PyNWB can be read in MatNWB and vice versa. Both PyNWB and MatNWB support advanced read/write for efficient interaction with very large data files (i.e., data too large for main memory) via lazy data loading, iterative data write, and data compression, among other features.

PyNWB is the Python reference API for NWB. Docs Tutorials Source






MatNWB is a MATLAB library for reading and writing NWB files. Docs Tutorials Source




Converting Data to NWB

NeuroConv is a Python library for automatic conversion of data from proprietary formats to NWB. Docs Source




Validating NWB Files

NWB provides tools to check that files comply with the NWB standard schema as well as to check whether the data complies with NWB Best Practices. Validating compliance with the NWB schema ensures that files are structurally correct and can be read by NWB APIs. Validating compliance with best practices helps improve data quality and (re-)usability.

NWB Inspector is a Python library and command-line tool for inspecting NWB files for adherence to NWB best practices. By default, the Inspector also runs the PyNWB validator to check for compliance with the NWB schema. The Inspector can also be easily extended with custom data checks, and its built-in checks can be configured. Docs Source

The PyNWB reference Python API includes classes and command line tools for validating compliance of files with the core NWB schema and the schema of NWB Neurodata Extensions (NDX). Validation Docs

Hint

In practice, most users should use the NWB Inspector to validate NWB files, as it checks for compliance with both the schema and best practices and provides greater flexibility. Direct use of PyNWB’s validator is primarily useful for use cases where schema compliance and performance are the primary concerns, for example, during development of extensions or as part of automated test environments.



Extending NWB

Neurodata Extensions (NDX) are used to extend the NWB data standard, e.g., to integrate new data types with NWB or to define standards for lab- or project-specific metadata. The tools listed here are used to create, document, and publish extensions. To learn more about how to create extensions, see the Extending NWB section.

The Neurodata Extensions Catalog (NDX Catalog) is a community-led catalog of Neurodata Extensions (NDX) to the NWB data standard. The NDX Catalog provides a central portal to search, publish, and review NDX. Catalog Source


The NDX Template provides a template for creating Neurodata Extensions (NDX) for the NWB data standard. When creating a new extension, the NDX Template creates a detailed NEXTSTEPS.md file describing how to create an extension and how to submit it to the NDX Catalog. Source


The staged-extensions GitHub repository is used to register new extensions for publication in the Neurodata Extensions Catalog (NDX Catalog). Source




The HDMF Documentation Utilities (hdmf-docutils) provide utility tools for creating documentation for extension schema defined using the NWB Schema Language. The NDX Template automatically sets up documentation for extensions via hdmf-docutils, so the utilities are part of most NDX code repositories without developers having to interact with the tool directly. Source

The NWB data standard is governed by the NWB Format Specification (a.k.a., the NWB Schema). When creating new extensions, we typically build on and reuse existing neurodata_types already available in NWB. The NWB Format Specification provides a reference definition for all types available in NWB. The NWB schema itself builds on the HDMF Common Schema. Docs Source

The HDMF Common Schema defines the schema of common, general data structures, which are used throughout the NWB Standard Schema but which are not specific to neurophysiology. Example types defined in the HDMF Common Schema include, e.g., all types related to DynamicTable for defining data tables. Docs Source



Core Development

Understanding core development tools (e.g., HDMF) is useful for developers, in particular when diving deeper into the core data infrastructure for NWB, e.g., when changing or creating new storage methods or when developing features for common data types (e.g., DynamicTable) that are defined in HDMF and used in NWB.

The Hierarchical Data Modeling Framework (HDMF) is a Python package for working with hierarchical data. It provides APIs for specifying data models, reading and writing data to different storage backends, and representing data with Python objects. HDMF forms the foundation of the PyNWB Python API for NWB. Docs Source