Fundamentals of Neural Control
This is a graduate-level course. Students from engineering, physics, mathematics, statistics, and neuroscience are all welcome to attend, provided that the prerequisites are satisfied.
Description:
The use of control theory to analyze and regulate biological neural systems is rapidly becoming essential to the design of fast, accurate, and reliable brain-machine interfaces (BMIs) and artificial prostheses. This course introduces fundamental tools to model, estimate, and control the behavior of neural systems of increasing complexity (i.e., from single neurons to large cortical layers), with cutting-edge applications in the fields of movement disorders, epilepsy, BMIs, and rehabilitation. The first part of the course introduces biophysically based and empirical models that describe the dynamics of neural systems, and studies the observability and controllability of these systems. The second part explores linear and nonlinear state estimation theory for neural systems, with applications to neural decoding and optimal state estimation. The third part focuses on designing state-based control algorithms that interact with the proposed models to achieve the assigned goals. The course combines formal lectures on relevant topics with case studies. As part of the workload, each student will be assigned a mid-term and a final project, which require the use of the proposed tools to solve open problems in the control of neural systems.
Course Objectives and Outcomes:
The objective of this course is to learn basic tools for modeling neural time series (e.g., local field potentials, membrane voltage, and electrocorticography) based on physiological knowledge, and to use model-based control tools to design feedback strategies for estimating and regulating the neural activity of single neurons, small neural populations, or large volumes. Through a mix of lectures and hands-on experience, students will learn how to use advanced modeling tools and numerical methods to describe the temporal dynamics of neural signals, how to design optimal neural prostheses, and how to validate these systems in simulation.
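Although the course labs and assignments use MATLAB, the flavor of the modeling tools above can be previewed in a few lines of code. The sketch below (in Python, with illustrative parameters chosen here rather than taken from the course material) simulates a leaky integrate-and-fire neuron, a much simpler cousin of the Hodgkin-Huxley models covered early in the course:

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron (a simpler stand-in for the
# Hodgkin-Huxley models covered in the course; all parameters illustrative).
dt, T = 0.1, 100.0                                       # time step, duration (ms)
tau, V_rest, V_th, V_reset = 10.0, -65.0, -50.0, -65.0   # ms, mV
I = 20.0                                                  # constant input drive (mV)
V, spikes = V_rest, []

for step in range(int(T / dt)):
    # Forward-Euler step of  tau * dV/dt = -(V - V_rest) + I
    V += dt * (-(V - V_rest) + I) / tau
    if V >= V_th:                  # threshold crossing -> emit a spike
        spikes.append(step * dt)   # record spike time (ms)
        V = V_reset                # reset membrane potential

print(f"{len(spikes)} spikes in {T:.0f} ms")
```

With this drive the steady-state voltage (-45 mV) sits above threshold, so the neuron fires periodically; lowering `I` below 15 would silence it.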
Topics Covered:
Hodgkin-Huxley equations; Single-unit neural models; Mean-field neural models; Nonlinear Dynamics; Observability and controllability; Kalman filter; State-space-based neural decoding; Feedback control via electric fields; Motor learning and control; Optimization methods; Internal models of dynamics; Numerical analysis in MATLAB.
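As a small taste of the state-estimation topics listed above, the sketch below implements a scalar Kalman filter that recovers a constant hidden state from noisy measurements (again a Python illustration with parameters invented here, not the course's MATLAB material):

```python
import numpy as np

# Scalar Kalman filter for the model  x_k = x_{k-1} + w_k,  y_k = x_k + v_k.
# All values below are illustrative, not taken from the course.
rng = np.random.default_rng(0)
x_true = 1.0                      # hidden state (e.g., a slowly varying parameter)
Q, R = 1e-5, 0.1                  # process and measurement noise variances
x_hat, P = 0.0, 1.0               # initial estimate and its variance

for _ in range(200):
    y = x_true + rng.normal(scale=np.sqrt(R))   # noisy measurement
    P = P + Q                                   # predict: uncertainty grows
    K = P / (P + R)                             # Kalman gain
    x_hat = x_hat + K * (y - x_hat)             # correct with the innovation
    P = (1.0 - K) * P                           # update estimate variance

print(f"estimate: {x_hat:.3f}")   # should be close to x_true = 1.0
```

The same predict/correct structure, with matrix-valued states and nonlinear extensions such as the UKF, underlies the neural decoding methods covered mid-course.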
Prerequisites: Undergraduate-level knowledge of signal processing, bioelectromagnetism, and MATLAB (equivalent to CSE 1010, ECE 3101, and BME 3500) is required. Undergraduate-level knowledge of neural physiology (equivalent to BME 4300) is highly recommended.
Required, Elective, or Selected Elective: Elective.
Lectures: 1 lecture per week (3 hours)
Grading: Homework: 30%; Midterm Project: 30%; Final Project: 40%
Textbook:
[TB] Steven J. Schiff (2012) Neural Control Engineering. ISBN: 978-0-262-01537-0
Other Recommended References:
[R1] Eugene M. Izhikevich (2011) Dynamical Systems in Neuroscience. ISBN: 978-0-262-51420-0
[R2] Reza Shadmehr, Sandro Mussa-Ivaldi (2012) Biological Learning and Control. ISBN: 978-0-262-01696-4
Plan of Lectures and Assignments
Lecture | Topic | References/Reading | Assignment
1 | Least-Squares Estimation and Singular Value Decomposition | Lecture Notes. TB: Ch. 1 & 7 | –
2 | Kalman Filter and Unscented Kalman Filter (UKF) | Lecture Notes. TB: Ch. 2 | Homework 1
3 | Models of Neurons | Lecture Notes. TB: Ch. 3-4. R1: Ch. 3 | –
4 | Equilibria, Limit Cycles, and Bifurcations | Lecture Notes. R1: Ch. 4 & 6 | Homework 2
5 | Parametric Estimation via Kalman filters | Lecture Notes. TB: Ch. 5. A1 | – |
6 | Mean-Field Models. Observability and Controllability | Lecture Notes. TB: Ch. 6. A2 | – |
7 | Computational Laboratory | Lecture Notes. TB: Ch. 6 | Homework 3 |
8 | MIDTERM PROJECT DISCUSSION | – | – |
9 | Applications of Neural Control | Lecture Notes. TB: Ch. 10-12 | – |
10 | Brain-Machine Interfaces: Encoding Models | Lecture Notes. TB: Ch. 9. A3 | –
11 | Brain-Machine Interfaces: Bayesian Decoding | Lecture Notes. TB: Ch. 9. R2: Ch. 5. A4 & A5 | Homework 4
12 | Motor Learning, Costs, and Rewards | Lecture Notes. R2: Ch. 9-10 | – |
13 | Optimal Feedback Control | Lecture Notes. R2: Ch. 11-12 | – |
14 | FINAL PROJECT DISCUSSION | – | – |
MATERIAL DEVELOPED WITH THE SUPPORT OF THE US NATIONAL SCIENCE FOUNDATION:
NSF-EECS AWARDS 1346888 AND 1518672, “EAGER: Modeling Network Dynamics in the Epileptic Brain to Develop Translational Tools for Seizure Localization and Detection”