Open dataset: study of multichannel sensorimotor cortical electrophysiology in primates

Nonhuman Primate Reaching with Multichannel Sensorimotor Cortex Electrophysiology
Public data set URL: https://zenodo.org/records/3854034

Contents

  • General Description
  • Possible uses
  • Variable names
  • Decoder Results
  • Videos
  • Supplements
  • Contact Information
  • Citation

General Description

The data set includes:

  • Timestamps of threshold crossings from the extracellular recordings (“spikes”), recorded simultaneously across channels and sorted into units (up to 5 per channel, including one unsorted “hash” unit), together with the sorted waveform snippets;
  • The x,y position of the fingertip of the reaching hand and the x,y position of the reach target (both sampled at 250 Hz).

The behavioral task was to make self-paced reaches to targets arranged in a grid (e.g. 8×8), without gaps or pre-movement delay intervals. One monkey reached with the right arm (recordings from the left hemisphere), and the other reached with the left arm (recordings from the right hemisphere). In some sessions, recordings were made from both M1 and S1 arrays (192 channels); in most sessions, M1 was recorded alone (96 channels).

The data come from two primate subjects: 37 sessions from monkey 1 (“Indy”, spanning approximately 10 months) and 10 sessions from monkey 2 (“Loco”, spanning approximately 1 month), comprising approximately 20,000 reaches for monkey 1 and approximately 6,500 reaches for monkey 2.

Possible uses

These data are well suited for training BCI decoders, particularly because they are not segmented into trials. We anticipate that this dataset will be valuable to researchers designing improved models of sensorimotor cortical activity, or as a common basis for comparing different BCI decoders. Other uses may include statistical analysis of arm kinematics, noise or signal correlations among units, and study of the stability or variability of extracellular recordings across sessions.
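
As an illustration, training a decoder on these continuous (non-trial-segmented) recordings typically starts by binning the spike timestamps into fixed-width count vectors. A minimal sketch in Python (the 64 ms bin width matches one of the bin widths in the Decoder Results below; the function name is ours):

  import numpy as np

  def bin_spikes(unit_spike_times, t_start, t_stop, bin_width_s=0.064):
      """Bin spike timestamps (seconds) into fixed-width count vectors.

      unit_spike_times: list of 1-D arrays of spike times, one per unit.
      Returns a (num_bins x num_units) count matrix and the bin edges.
      """
      edges = np.arange(t_start, t_stop + bin_width_s, bin_width_s)
      counts = [np.histogram(times, bins=edges)[0] for times in unit_spike_times]
      return np.column_stack(counts), edges

The resulting count matrix can then be paired with the cursor or fingertip kinematics, resampled to the same bins, to form decoder training data.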

Variable names

Each file contains data in the following format. In the following, n represents the number of recording channels, u represents the number of sorting units, and k represents the number of samples.

  • chan_names – n x 1
    • Cell array of channel identifier strings, such as “M1001”.
  • cursor_pos – k x 2
    • The position of the cursor in the Cartesian coordinate system (x, y), mm.
  • finger_pos – k x 3 or k x 6
    • The position of the working fingertip in Cartesian coordinates (z, -x, -y), as reported by the hand tracker, in cm. The cursor position is therefore an affine transformation of the fingertip position, via the matrix below (see the loading sketch after this list):

      \begin{pmatrix} 0 & 0 \\ -10 & 0 \\ 0 & -10 \end{pmatrix}

      Note that for some sessions, finger_pos also includes the orientation of the sensor; for those sessions, the full state is (z, -x, -y, azimuth, elevation, roll).

  • target_pos – k x 2
    • The position of the target in the Cartesian coordinate system (x, y), mm.
  • t – k x 1
    • Timestamp, in seconds, for each example of cursor_pos, finger_pos, and target_pos.
  • spikes – n x u
    • Cell array of spike event vectors. Each element in the cell array is a vector of spike event timestamps, in seconds. The first unit (u1) is an “unsorted” unit: it contains the threshold crossings that remained after the spikes on that channel were sorted into the other units (u2, u3, etc.). For some sessions, spikes were sorted into as many as 2 units (i.e. u = 3); for others, as many as 4 units (u = 5).
  • wf – n x u
    • Cell array of spike event waveform snippets. Each element in the cell array is a matrix of spike event waveforms; each waveform corresponds to one timestamp in spikes. Waveform samples are in microvolts.
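
A minimal loading sketch in Python, assuming the session files are MATLAB v7.3 (HDF5) .mat files readable with h5py (an older MAT format would use scipy.io.loadmat instead); the filename is illustrative:

  import h5py
  import numpy as np

  with h5py.File("indy_20160407_02.mat", "r") as f:
      t = f["t"][:].squeeze()             # k timestamps, seconds
      cursor_pos = f["cursor_pos"][:].T   # k x 2 (x, y), mm
      finger_pos = f["finger_pos"][:].T   # k x 3 or k x 6 (z, -x, -y, ...), cm
      target_pos = f["target_pos"][:].T   # k x 2 (x, y), mm

      # spikes is an n x u cell array stored as HDF5 object references;
      # dereference one cell to obtain its vector of spike timestamps.
      # (The axis order of cell arrays may be transposed on disk.)
      ref = f["spikes"][0][0]
      unit_times = f[ref][:].squeeze()    # spike timestamps, seconds

  # Check the fingertip-to-cursor transform described above: multiply the
  # first three fingertip columns (cm) by the 3x2 matrix to recover the
  # cursor position (mm), up to a possible constant offset.
  M = np.array([[0.0, 0.0], [-10.0, 0.0], [0.0, -10.0]])
  print(np.max(np.abs(finger_pos[:, :3] @ M - cursor_pos)))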

Decoder Results

These data were used to fit the decoder models reported by Makin et al. [1]. To aid comparison with other decoders, we include performance summaries (for each session, decoder, bin width, etc.) in the file refh_result.csv, which contains the following columns:

  • session – identifier, for example “indy_20160407_02”
  • monkey – one of “indy” or “loco”
  • num_neurons – total number of features used in the decoder
  • num_training_samples – number of samples (at the specified bin width) used to train the decoder (taken in order from the beginning of the file)
  • num_testing_samples – number of samples used to evaluate the decoder (taken in order, up to the end of the file)
  • kinematic_axis – one of “posx”, “posy”, “velx”, “vely”, “accx” or “accy”
  • bin_width – one of “16”, “32”, “64” or “128”
  • decoder – one of “regression”, “KF_observed”, “KF_static”, “KF_dynamic”, “UKF”, “rEFH_static” or “rEFH_dynamic”
  • rsq – coefficient of determination, R²
  • snr – signal-to-noise ratio, SNR = -10 log10(1 - R²)
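
As a usage sketch, the summary table can be loaded with pandas and the snr column checked against its definition (column names as listed above; bin_width is assumed to parse as an integer here):

  import numpy as np
  import pandas as pd

  results = pd.read_csv("refh_result.csv")

  # snr should equal -10 * log10(1 - rsq), per the definition above.
  print(np.allclose(results["snr"], -10 * np.log10(1 - results["rsq"])))

  # Example query: mean R² per decoder for one monkey at 64 ms bins.
  subset = results[(results["monkey"] == "indy") & (results["bin_width"] == 64)]
  print(subset.groupby("decoder")["rsq"].mean())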

Videos

In some sessions, we used a hardware video capture device to record the on-screen output of the stimulus presentation display. The videos are therefore a faithful representation of the stimuli and feedback the monkeys received. Videos are available for the following sessions:

  • indy_20160921_01
  • indy_20160930_02
  • indy_20160930_05
  • indy_20161005_06
  • indy_20161006_02
  • indy_20161007_02
  • indy_20161011_03
  • indy_20161013_03
  • indy_20161014_04
  • indy_20161017_02

Supplements

Raw broadband neural recordings are available as supplements for the following sessions in this dataset:

  • indy_20160622_01: doi:10.5281/zenodo.1488440
  • indy_20160624_03: doi:10.5281/zenodo.1486147
  • indy_20160627_01: doi:10.5281/zenodo.1484824
  • indy_20160630_01: doi:10.5281/zenodo.1473703
  • indy_20160915_01: doi:10.5281/zenodo.1467953
  • indy_20160916_01: doi:10.5281/zenodo.1467050
  • indy_20160921_01: doi:10.5281/zenodo.1451793
  • indy_20160927_04: doi:10.5281/zenodo.1433942
  • indy_20160927_06: doi:10.5281/zenodo.1432818
  • indy_20160930_02: doi:10.5281/zenodo.1421880
  • indy_20160930_05: doi:10.5281/zenodo.1421310
  • indy_20161005_06: doi:10.5281/zenodo.1419774
  • indy_20161006_02: doi:10.5281/zenodo.1419172
  • indy_20161007_02: doi:10.5281/zenodo.1413592
  • indy_20161011_03: doi:10.5281/zenodo.1412635
  • indy_20161013_03: doi:10.5281/zenodo.1412094
  • indy_20161014_04: doi:10.5281/zenodo.1411978
  • indy_20161017_02: doi:10.5281/zenodo.1411882
  • indy_20161024_03: doi:10.5281/zenodo.1411474
  • indy_20161025_04: doi:10.5281/zenodo.1410423
  • indy_20161026_03: doi:10.5281/zenodo.1321264
  • indy_20161027_03: doi:10.5281/zenodo.1321256
  • indy_20161206_02: doi:10.5281/zenodo.1303720
  • indy_20161207_02: doi:10.5281/zenodo.1302866
  • indy_20161212_02: doi:10.5281/zenodo.1302832
  • indy_20161220_02: doi:10.5281/zenodo.1301045
  • indy_20170123_02: doi:10.5281/zenodo.1167965
  • indy_20170124_01: doi:10.5281/zenodo.1163026
  • indy_20170127_03: doi:10.5281/zenodo.1161225
  • indy_20170131_02: doi:10.5281/zenodo.854733
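
These DOIs resolve through doi.org, so the supplementary records can be located programmatically. A minimal sketch (the mapping is copied from the list above; only two entries are shown):

  # Map session identifiers to the DOIs of their raw broadband recordings.
  broadband_dois = {
      "indy_20160622_01": "10.5281/zenodo.1488440",
      "indy_20170131_02": "10.5281/zenodo.854733",
      # ... remaining sessions as listed above
  }

  def doi_url(doi):
      """Return the resolver URL for a DOI (redirects to the Zenodo record)."""
      return "https://doi.org/" + doi

  print(doi_url(broadband_dois["indy_20160622_01"]))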

Contact Information

We would be happy to hear from you if you find this dataset valuable. Corresponding author: J. E. O’Doherty [email protected].

Citation

@misc{ODoherty:2017,
  author = {O'{D}oherty, Joseph E. and Cardoso, Mariana M. B. and Makin, Joseph G. and Sabes, Philip N.},
  title  = {Nonhuman Primate Reaching with Multichannel Sensorimotor Cortex Electrophysiology},
  doi    = {10.5281/zenodo.788569},
  url    = {https://doi.org/10.5281/zenodo.788569},
  month  = may,
  year   = {2017}
}

  1. Makin, J. G., O’Doherty, J. E., Cardoso, M. M. B. & Sabes, P. N. (2018). Superior arm-movement decoding from cortex with a new, unsupervised-learning algorithm. J Neural Eng. 15(2): 026010. doi:10.1088/1741-2552/aa9e95