Glossary of Core NWB Tools

The glossary below provides a quick overview of the key software packages in the core NWB software stack. For a more general discussion of how the stack is organized, see the NWB Software Ecosystem page on the main NWB website.

Read/Write NWB File APIs

The NWB reference APIs provide full support for reading and writing all components of the NWB standard, including support for extensions. The APIs are interoperable, i.e., files created with PyNWB can be read in MatNWB and vice versa. Both PyNWB and MatNWB support advanced read/write for efficient interaction with very large data files (i.e., data too large for main memory) via lazy data loading, iterative data write, and data compression, among other techniques.


PyNWB is the Python reference API for NWB. Docs Tutorials Source


MatNWB is a MATLAB library for reading and writing NWB files. Docs Tutorials Source

Converting Data to NWB


NeuroConv is a Python library for automatic conversion from proprietary data formats to NWB. Docs Source


The NWB GUIDE (Graphical User Interface for Data Entry) is a desktop app that provides a no-code interface for converting neurophysiology data to NWB using NeuroConv. Note: NWB GUIDE is in pre-release status and under active development. Source

Validating NWB Files

NWB provides tools to check that files comply with the NWB standard schema as well as to check whether the data complies with NWB Best Practices. Validating compliance with the NWB schema ensures that files are structurally correct and can be read by NWB APIs. Validating compliance with best practices helps improve data quality and (re-)usability.


NWB Inspector is a Python library and command-line tool for inspecting NWB files for adherence to NWB best practices. By default, the Inspector also runs the PyNWB validator to check for compliance with the NWB schema. The Inspector can also be easily extended with custom data checks, and its built-in checks can be configured. Docs Source


The PyNWB reference Python API includes classes and command-line tools for validating compliance of files with the core NWB schema and the schemas of NWB Neurodata Extensions (NDX). Validation Docs


In practice, most users should use the NWB Inspector to validate NWB files, as it checks compliance with both the schema and best practices and provides greater flexibility. Direct use of PyNWB's validator is primarily useful for use cases where schema compliance and performance are the main concerns, for example, during development of extensions or as part of automated test environments.

Extending NWB

Neurodata Extensions (NDX) are used to extend the NWB data standard, e.g., to integrate new data types with NWB or to define standards for lab- or project-specific metadata. The tools listed here are used to create, document, and publish extensions. To learn more about how to create extensions, see the Extending NWB section.


The Neurodata Extensions Catalog (NDX Catalog) is a community-led catalog of Neurodata Extensions (NDX) to the NWB data standard. The NDX Catalog provides a central portal to search, publish, and review NDX. Catalog Source


The NDX Template provides a template for creating Neurodata Extensions (NDX) for the NWB data standard. When you create a new extension, the template generates a detailed file describing how to build the extension and how to submit it to the NDX Catalog. Source


The staged-extensions GitHub repository is used to register new extensions for publication in the Neurodata Extensions Catalog (NDX Catalog). Source


The HDMF Documentation Utilities (hdmf-docutils) provide utility tools for creating documentation for extension schema defined using the NWB Schema Language. The NDX Template automatically sets up extension documentation via hdmf-docutils, so the utilities are part of most NDX code repositories without developers having to interact with the tool directly. Source


The HDMF Specification Language defines formal structures for describing the organization of complex data using basic concepts, e.g., Groups, Datasets, Attributes, and Links. The HDMF Specification Language is defined by the Hierarchical Data Modeling Framework (HDMF). The NWB Specification Language is a derivative of the HDMF Specification Language with minor modifications for NWB (e.g., the use of the term neurodata_type). Source Docs
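
To give a flavor of the specification language, the hypothetical YAML fragment below sketches how an extension type could be declared in terms of groups, datasets, and attributes. The type name, fields, and values are all invented for illustration and are not part of any published schema:

```yaml
groups:
- neurodata_type_def: LabMetaDataExample   # hypothetical new type
  neurodata_type_inc: LabMetaData          # extends an existing NWB type
  doc: Example lab-specific metadata extension.
  attributes:
  - name: rig_id
    dtype: text
    doc: Identifier of the recording rig.
  datasets:
  - name: room_temperature
    dtype: float32
    doc: Room temperature during the session, in degrees Celsius.
```

Each entry pairs a structural element (group, dataset, or attribute) with a required `doc` string and a `dtype`, which is the basic pattern used throughout NWB schema files.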


The NWB data standard is governed by the NWB Format Specification (a.k.a., the NWB Schema) described using the NWB Specification Language. When creating new extensions we typically build on and reuse existing neurodata_types already available in NWB. The NWB Format Specification provides a reference definition for all types available in NWB. The NWB schema itself builds on the HDMF Common Schema. Docs Source


The HDMF Common Schema defines the schema of common, general data structures, which are used throughout the NWB Standard Schema but which are not specific to neurophysiology. Example types defined in the HDMF Common Schema include the types related to DynamicTable for defining data tables. Docs Source

Core Development

Understanding core development tools (e.g., HDMF) is useful for developers, in particular when diving deeper into the core data infrastructure for NWB, e.g., when changing or creating storage methods or when developing features for common data types (e.g., DynamicTable) that are defined in HDMF and used in NWB.


The Hierarchical Data Modeling Framework (HDMF) is a Python package for working with hierarchical data. It provides APIs for specifying data models, reading and writing data to different storage backends, and representing data with Python objects. HDMF forms the foundation of the PyNWB Python API for NWB. Docs Source


The HDMF Zarr (HDMF-Z) library implements a Zarr backend for HDMF. HDMF-Z also provides convenience classes for integrating Zarr with the PyNWB Python API to support writing NWB files to Zarr. Docs Source