Craniux: A LabVIEW-based Modular Software Framework for Brain-Computer Interface Research
Brain-computer interface (BCI) systems require software capable of acquiring neural signals and converting them in real time into commands for controlling external devices, such as a prosthetic hand or a personal computer. BCI research often requires the implementation of new signal processing techniques, neural decoding algorithms, and experimental paradigms. The Craniux BCI software framework was developed to address these needs.
Craniux is an open-source, open-access, real-time BCI framework, developed using LabVIEW (National Instruments, Inc.). The software enables BCI researchers to develop and share new BCI modules and to take full advantage of many features inherent to the development environment. Craniux has been implemented with a number of extensible features, such as:
- Automated parallel processing for real-time neural signal acquisition and decoding, adaptive neural decoder training, real-time data visualization, and online experiment parameter control.
- Streaming and storage of raw neural data, intermediate processing data, and experimental parameters to disk for offline analysis.
- A distributed framework of BCI modules that can be spread across computers, connected by well-defined generic network communication protocols.
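Craniux itself is implemented as LabVIEW block diagrams, but the idea behind the last point, engines on separate hosts exchanging framed messages over TCP/IP, can be sketched in text. The length-prefixed framing and field names below are illustrative assumptions for this sketch, not the actual Craniux wire format.

```python
import json
import struct

def pack_message(payload: dict) -> bytes:
    """Serialize a message and prefix it with a 4-byte big-endian length,
    so the receiving engine knows how many bytes to read from the TCP stream."""
    body = json.dumps(payload).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def unpack_message(buffer: bytes) -> tuple[dict, bytes]:
    """Parse one length-prefixed message; return (payload, remaining bytes)."""
    (length,) = struct.unpack(">I", buffer[:4])
    body = buffer[4:4 + length]
    return json.loads(body.decode("utf-8")), buffer[4 + length:]

# Example: an acquisition engine sends a block of preprocessed data
# to the signal processing engine over an established TCP connection.
frame = pack_message({"engine": "acquisition", "block": [0.1, 0.2, 0.3]})
msg, rest = unpack_message(frame)
```

Length-prefixed framing of this kind is a common way to delimit messages on a TCP stream, which otherwise provides no message boundaries.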
Craniux System Architecture
The Craniux system comprises separate modules, including engines for Acquisition, Signal Processing, and Application, along with their associated user interfaces. In the illustration below, engines and user interface elements are spread across four network hosts: an Acquisition Host, a Signal Processing Host, an Application Host, and a User Interface Host.
Craniux system framework. The Craniux system comprises the acquisition, signal processing, and application engines, their associated GUIs, the system launcher, and the data saving manager. Engines and user interface elements are spread across four network hosts: the acquisition host, the signal processing host, the application host, and the user interface host, though the same computer may serve as multiple hosts. Network communication between system engines, as well as communication between engines and GUIs, is performed using the TCP/IP protocol. A block of neural data enters the system through the acquisition engine, which sends preprocessed data to the signal processing engine. The signal processing engine generates a control signal, which is then sent to the application engine. The application engine then communicates any relevant application-specific data (e.g., target information used for neural decoder training) back to the acquisition engine, which reads the next block of neural data. Bidirectional data transfer occurs between engine-specific GUIs and their associated engines, with system parameters transferred from the GUI to the engine and visualization data transferred from the engine to the GUI. Finally, the system launcher is responsible for loading the desired engines, tracking general experimental parameters, and controlling the experiment.
Neural data can be acquired from a range of sources and devices. The Acquisition engine then sends preprocessed data to the Signal Processing engine. The Signal Processing engine uses standard or customized filters to generate a control signal, which is then sent to the Application engine for subject display or device control. Bidirectional data transfer occurs between engine-specific graphical user interfaces (GUIs) and their associated engines, with system parameters transferred from the GUI to the engine and visualization data transferred from the engine to the GUI.
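The closed loop described above, acquisition feeding signal processing, signal processing feeding the application, and application state returning to acquisition, can be sketched as a simple cycle. The function names and toy engines here are hypothetical stand-ins for the real Craniux modules, not its actual API.

```python
def run_bci_cycle(acquire, decode, apply, n_blocks):
    """Run a Craniux-style closed loop: acquisition -> signal processing ->
    application, with application-specific data (e.g., target information
    for decoder training) fed back to acquisition before the next block."""
    feedback = None  # application state returned to the acquisition stage
    for _ in range(n_blocks):
        block = acquire(feedback)         # read the next block of neural data
        control_signal = decode(block)    # generate a control signal
        feedback = apply(control_signal)  # drive display/device, return state
    return feedback

# Toy engines standing in for the real modules:
result = run_bci_cycle(
    acquire=lambda fb: [1.0, 2.0],    # pretend block of preprocessed data
    decode=lambda blk: sum(blk),      # trivial "decoder"
    apply=lambda sig: {"last_control": sig},
    n_blocks=3,
)
```

In Craniux the equivalent loop runs continuously in parallel LabVIEW processes rather than as a single sequential function; the sketch only shows the order in which data moves between stages.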
Craniux system screenshot during population vector-based control. Left. Plot of the instantaneous activity of each feature used for cursor control along its preferred direction (blue) and the resultant population vector (red). Upper-middle. R-squared value plot indicating the distribution of values obtained during population vector training (blue) compared to the mean, 80th, 90th, and 95th percentile values obtained after training on 1000 iterations of target-shuffled data (red, dark orange, light orange, and yellow lines). The threshold above which features are chosen for use in the decoder is shown by the pink line. Lower-middle. R-squared values obtained during population vector training arranged by channel and frequency band. Note that the 70–120 Hz frequency band features show high R-squared values across all channels, consistent with the method used to generate the simulated ECoG signals. Right. The preferred direction distribution of all features. Red lines correspond to those features with R-squared values above the user-determined threshold, while white lines are those features falling below the threshold.
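The population vector decoding shown in the screenshot combines each significant feature's activity along its preferred direction, keeping only features whose R-squared value exceeds the shuffle-derived threshold. A minimal sketch of that combination step, with hypothetical parameter names and 2-D directions given in radians, might look like this:

```python
import math

def population_vector(activities, preferred_dirs, r_squared, threshold):
    """Sum each feature's instantaneous activity along its preferred
    direction (radians), including only features whose R-squared value
    exceeds the threshold obtained from shuffle testing. Returns the
    (x, y) components of the resultant population vector."""
    x = y = 0.0
    for a, theta, r2 in zip(activities, preferred_dirs, r_squared):
        if r2 > threshold:
            x += a * math.cos(theta)
            y += a * math.sin(theta)
    return x, y

# Two above-threshold features pointing right and up, plus one strongly
# active but sub-threshold feature that is excluded from the resultant:
vx, vy = population_vector(
    activities=[1.0, 1.0, 5.0],
    preferred_dirs=[0.0, math.pi / 2, math.pi],
    r_squared=[0.6, 0.7, 0.1],
    threshold=0.3,
)
```

Thresholding against percentiles of R-squared values from target-shuffled data, as in the screenshot's middle panel, is one way to reject features whose apparent tuning could arise by chance.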