Automatic Video Based Spatial Co-Registration of Head Mounted Probes in Motion

Web Published:
12/7/2018
Description:


 

Princeton Docket #19-3514

 

Brain activity monitoring by electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) is an important step in the diagnosis of epilepsy, sleep disorders, coma, and other conditions. The spatial co-registration of scalp channel positions is a major challenge to data acquisition with these methods. Researchers in Princeton University’s Departments of Psychology and Computer Science have developed a quick and accurate method of scanning a subject’s head shape that overcomes the shortcomings of conventional approaches.

 

The novel video-based method bypasses the need for expensive 3D digitizers. It requires only a 10- to 20-second video of the subject’s head prior to the imaging session and fits probe locations precisely to the extracted head positions, avoiding approximation assumptions. The key breakthrough is the method’s tolerance of subject movement, which yields accurate measurements for subjects who are unable to remain still. The researchers have performed proof-of-concept experiments on 20 adult subjects and 10 infants and are preparing to design a user interface.
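
To illustrate the underlying idea only (this is a schematic sketch under assumed details, not the inventors’ implementation): if the head pose can be estimated in each video frame from a few scalp or facial landmarks, each detected probe position can be re-expressed in a head-fixed coordinate frame, so that head motion between frames is factored out rather than corrupting the measurement. The landmark coordinates, camera calibration, and probe detections below are hypothetical placeholders.

    import cv2
    import numpy as np

    # Hypothetical 3D reference landmarks on a generic head model, in head
    # coordinates (mm): nasion, inion, left/right preauricular points.
    # All z = 0 here so the planar PnP solver can be used with four points.
    HEAD_MODEL_POINTS = np.array([
        [0.0,   90.0, 0.0],
        [0.0, -110.0, 0.0],
        [-75.0,  0.0, 0.0],
        [ 75.0,  0.0, 0.0],
    ], dtype=np.float64)

    def head_pose(landmarks_2d, camera_matrix, dist_coeffs):
        """Estimate head rotation and translation for one frame from the
        2D pixel positions of the reference landmarks (perspective-n-point)."""
        ok, rvec, tvec = cv2.solvePnP(
            HEAD_MODEL_POINTS, landmarks_2d, camera_matrix, dist_coeffs,
            flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)
        return R, tvec

    def probe_in_head_frame(probe_xyz_camera, R, tvec):
        """Map a probe position measured in camera coordinates into the
        head-fixed frame, cancelling head motion between frames."""
        return R.T @ (probe_xyz_camera.reshape(3, 1) - tvec)

Because every frame’s probe estimate lands in the same head-fixed frame, estimates from a short video can be aggregated into stable probe locations even when the subject moves during recording.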

 

Applications       

•       Probe-based neuro-imaging

•       Neuro-imaging of subjects who are unable to remain still, e.g., infants and clinical patients

Advantages

•       Fast, inexpensive, and reliable data acquisition

•       Not influenced by metallic objects

•       Works on moving subjects

 

Intellectual Property & Development Status

 

Patent protection is pending.

 

Princeton is currently seeking partners to fund the further development and commercialization of this opportunity.

 

Publications

 

Publication of the methods and results is pending; a description is available under a confidentiality agreement.

 

The Inventors

Sagi Jaffe-Dax is a postdoctoral researcher in the laboratory of Lauren Emberson at Princeton University. He received a Ph.D. in Computational Neuroscience and a B.A. in Cognitive Sciences from The Hebrew University of Jerusalem.

 

Amit H. Bermano is a senior lecturer (equivalent to assistant professor in the US) in the School of Computer Science at Tel Aviv University. He completed his postdoctoral research at the Princeton Graphics Group, hosted by Professor Szymon Rusinkiewicz and Professor Thomas Funkhouser. He received a Dr. Sc. in Computer Science from ETH Zurich, in collaboration with Disney Research Zurich, and both an M.Sc. and a B.Sc. in Computer Science from the Israel Institute of Technology.

 

Lauren L. Emberson is an assistant professor in the Department of Psychology at Princeton University. She performed her postdoctoral research in the laboratory of Dr. Richard Aslin in the Department of Brain and Cognitive Sciences at the University of Rochester. She received a Ph.D. in Psychology from Cornell University and a B.Sc. in Cognitive Systems from the University of British Columbia. She is a recipient of the James S. McDonnell Foundation Understanding Human Cognition Scholar Award, the Eric and Wendy Schmidt Transformative Technology Fund, and the Boyd McCandless Award, among others.

 

Contact:

 

Laurie Tzodikov

Princeton University Office of Technology Licensing

(609) 258-7256 • tzodikov@princeton.edu

 

Catherine Ruesch

Princeton University Office of Technology Licensing

University Administrative Fellow

cruesch@princeton.edu 

