Publication
Using Reinforcement Learning to Provide Stable Brain-Machine Interface Control Despite Neural Input Reorganization
Downloadable Content
- Persistent URL
- Last modified
- 2025-03-05
- Type of Material
- Authors
- Eric A. Pohlmeyer, University of Miami
- Babak Mahmoudi, Emory University
- Shijia Geng, University of Miami
- Noeline W. Prins, University of Miami
- Justin C. Sanchez, University of Miami
- Language
- English
- Date
- 2014-01-30
- Publisher
- Public Library of Science
- Publication Version
- Copyright Statement
- © 2014 Pohlmeyer et al.
- License
- Final Published Version (URL)
- Title of Journal or Parent Work
- PLOS ONE
- ISSN
- 1932-6203
- Volume
- 9
- Issue
- 1
- Start Page
- e87253
- End Page
- e87253
- Grant/Funding Information
- This work was funded under the Defense Advanced Research Projects Agency (DARPA, www.darpa.mil) Reorganization and Plasticity to Accelerate Injury Recovery (REPAIR) project N66001-10-C-2008.
- Supplemental Material (URL)
- Abstract
- Brain-machine interface (BMI) systems give users direct neural control of robotic, communication, or functional electrical stimulation systems. As BMI systems begin transitioning from laboratory settings into activities of daily living, an important goal is to develop neural decoding algorithms that can be calibrated with a minimal burden on the user, provide stable control for long periods of time, and can be responsive to fluctuations in the decoder's neural input space (e.g. neurons appearing or being lost amongst electrode recordings). These are significant challenges for static neural decoding algorithms that assume stationary input/output relationships. Here we use an actor-critic reinforcement learning architecture to provide an adaptive BMI controller that can successfully adapt to dramatic neural reorganizations, can maintain its performance over long time periods, and which does not require the user to produce specific kinetic or kinematic activities to calibrate the BMI. Two marmoset monkeys used the Reinforcement Learning BMI (RLBMI) to successfully control a robotic arm during a two-target reaching task. The RLBMI was initialized using random initial conditions, and it quickly learned to control the robot from brain states using only a binary evaluative feedback regarding whether previously chosen robot actions were good or bad. The RLBMI was able to maintain control over the system throughout sessions spanning multiple weeks. Furthermore, the RLBMI was able to quickly adapt and maintain control of the robot despite dramatic perturbations to the neural inputs, including a series of tests in which the neuron input space was deliberately halved or doubled.
- Author Notes
- Keywords
- Research Categories
- Biology, Neuroscience
- Engineering, Biomedical
Relations
- In Collection:
Items
| Title | File Description | Date Uploaded | Visibility |
|---|---|---|---|
| Publication File - s5pxx.pdf | Primary Content | 2025-03-04 | Public |