Broadly speaking, my research is in the field of cognitive neuroscience, in which the major task is to investigate the neural bases of (human) cognition. It includes at least two major directions of research: neural representation and neural computation. Neural representation concerns how information is represented in our brain; neural computation concerns how neural representations are formed and manipulated. I believe there are generic kinds of neural computation, called canonical neural computation, that can be applied to information processing in different modalities and cognitive domains. For my specific research interest, I use speech and language as a model to investigate the canonical neural computation underlying human cognition. Two kinds of canonical neural computation are of particular interest:
I. Transformation

Variables or representations change over time and/or across functional brain regions. My focus is the motor-to-sensory transformation and its functional role in (speech and language) behavior, including its putative computational function in the following research topics:
- Speech production and control: The high computational demand of speech (e.g., producing 4-8 syllables per second) requires an efficient means of control. I have proposed that a motor-to-sensory transformation operates in a sequential manner for speech control, and a series of studies have been carried out to test the proposed model. Many aspects and questions still need to be addressed. For example, what are the computational roles of the motor-to-sensory transformation in speech control? And what are its neural mechanisms and implementation?
- Perception: One theory that links motor processes and perception is the motor theory of speech perception and language comprehension. Numerous studies have observed activity in motor regions while participants listen to speech or read words (verbs) and sentences, but the function of motor activation during perception is still debated. Can the motor-to-sensory transformation offer a mechanistic account of these data and explain why motor regions are engaged during perception? Ultimately, can the motor-to-sensory transformation address the invariance problem in speech perception?
- Learning: Second language (L2) learning and bilingualism are our focus. Learning, especially learning speech, essentially creates a perception-action mapping, which requires the reproduction of sound patterns that carry particular meanings. So, does L2 learning recruit the motor-to-sensory transformation? How do representational changes between languages engage the motor-to-sensory transformation in bilinguals?
- Memory, mental imagery, and other higher-order cognitive functions: Both the motor-to-sensory transformation and memory retrieval can induce internal neural representations without external stimulation (Fig. 2). How do they differ? What is their functional specificity, especially in speech and language? Moreover, how do internally induced neural representations link to our subjective feeling of imagination?
- Special populations: Can the motor-to-sensory transformation be the underlying mechanism for symptoms in special populations, such as those with speech and language deficits (e.g., stuttering, dyslexia) and those with mental and neurological disorders (e.g., auditory hallucination, a positive symptom of schizophrenia; also see the agency and self-monitoring point below)?
- Agency and self-monitoring: How do we have body ownership (knowing our body is ours and under our control, in contrast to cases like the rubber hand illusion and phantom limb sensation), and how do we identify incoming stimulation as self-generated (e.g., knowing that speech feedback is produced by me, in contrast to cases like auditory hallucination)? Can the motor-to-sensory transformation, in combination with perception, provide the sense of agency and a self-monitoring function? (See section II. Integration for more details.)
II. Integration

Combining information across time, locations, or modalities usually creates new representations. My research focus in this direction is on temporal integration (mostly with my collaborators) and multi-modal integration:
Temporal integration (collaborating with Nai Ding, Xiangbin Teng, and David Poeppel):
Similar kinds of information are summarized over time (e.g., sounds, written words, and the linguistic processes that follow). One theory proposes that neural oscillations serve an integrative function during speech perception. How do neural oscillations mediate this integrative process, as well as speech perception and language comprehension? How do the proposed neural oscillations relate to memory (cf. the 2013 Current Biology and 2015 Scientific Reports studies), for example to the integration and comparison of information before and after retrieval, and to later evaluation processes (e.g., the N400)?
Multi-modal integration: Information from different sources is combined. The most common kind of multi-modal integration is multisensory integration (e.g., visual-auditory integration, as when looking at a person's face while listening to speech). I am particularly interested in how motor and other cognitive functions integrate, and in the function of such integration. This multi-modal integration differs from common multisensory integration in that it does not require information to come from external sources; instead, the information can be induced internally, such as via the motor-to-sensory transformation. For example:
1) How does the motor-to-sensory transformation integrate with external feedback for speech control? What would the computation be when they are (temporally, spectrally, or spatially) inconsistent?
2) How are the sense of agency and the function of self-monitoring obtained by integrating the motor-to-sensory transformation with sensory feedback?
3) Can the motor-to-sensory transformation integrate with other information to create new (episodic and semantic) memories?
I am also interested in cross-disciplinary research, such as between neuroscience and computer science. By collaborating with faculty in computer science (Zheng Zhang and Xipeng Qiu), we aim to build a bridge between artificial intelligence (AI) and human intelligence in the areas of speech and language. For example, we investigate commonalities between natural language processing and the neural bases of language processing, especially with respect to semantics. Moreover, we would like to use recurrent neural network models to test neuroscience theories and models, such as lateralization and the motor-to-sensory transformation. The purpose of this endeavor is to advance both fields: borrowing methods and models from computer science to understand our brain better, while lending neuroscience findings and models to make AI smarter and more human-like.
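To give a flavor of what such a network model might look like, here is a toy MATLAB sketch of a simple recurrent (Elman-style) network that maps a motor (articulatory) sequence to a predicted sensory (auditory) sequence. This is only an illustration of the general idea; all sizes, names, and the random placeholder inputs are hypothetical, and it makes no claim about the actual models used in our collaborations.

```matlab
% Toy sketch: a simple recurrent network mapping a motor (articulatory)
% sequence to predicted sensory (auditory) features -- a stand-in for the
% idea of testing the motor-to-sensory transformation in network models.
% All dimensions and inputs below are hypothetical placeholders.
nMotor = 10; nHidden = 32; nSensory = 20; nSteps = 50;
rng(1);
Wxh = 0.1*randn(nHidden, nMotor);    % motor input -> hidden
Whh = 0.1*randn(nHidden, nHidden);   % hidden recurrence (temporal context)
Why = 0.1*randn(nSensory, nHidden);  % hidden -> predicted sensory output

h = zeros(nHidden, 1);               % initial hidden state
motorSeq = randn(nMotor, nSteps);    % placeholder articulatory features
sensoryPred = zeros(nSensory, nSteps);
for t = 1:nSteps
    h = tanh(Wxh*motorSeq(:,t) + Whh*h);   % recurrent state update
    sensoryPred(:,t) = Why*h;              % predicted auditory features
end
```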
Click here to download the ZIP file that contains the main functions, a detailed tutorial, a manual, and sample data. The tutorial and manual can also be downloaded separately: tutorial, manual.
TopoToolbox is open-source software for topographic analysis of event-related electrophysiological (EEG/MEG) data, based on the methods proposed by Tian and Huber (2008) and Tian, Poeppel, and Huber (2011). TopoToolbox provides tools for researchers to directly derive robust measures of response pattern (topographic) similarity and psychologically meaningful response magnitude from electromagnetic signals in sensor space. These measures are useful for testing psychological theories without requiring anatomical assumptions.
Three functions are provided in this toolbox:
Angle test: testing topographic similarity between experimental conditions
Projection test: normalizing individual difference against a template to measure response magnitude
Angle dynamics test: assessing pattern similarity over time
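To make these three measures concrete, below is a minimal MATLAB sketch of the computations they are built on: cosine similarity between sensor topographies and projection onto a template (cf. Tian & Huber, 2008). This is an illustration only, not the toolbox's actual code; the variable names (condA, condB, template, data) are placeholders, and the real functions additionally handle preprocessing, individual differences, and statistics.

```matlab
% Minimal sketch of the topographic measures (cf. Tian & Huber, 2008).
% Assumes condA, condB, and template are [nSensors x 1] topographies
% (e.g., mean amplitude per sensor within a time window), and data is a
% [nSensors x nTimes] matrix of one condition's evoked response.

% Angle test: cosine of the angle between two topographies.
% Values near 1 indicate similar spatial patterns regardless of magnitude.
cosAngle = dot(condA, condB) / (norm(condA) * norm(condB));

% Projection test: project an individual topography onto the normalized
% template to obtain a signed response magnitude that is comparable
% across individuals despite differences in topographic patterns.
magnitude = dot(condA, template) / norm(template);

% Angle dynamics test: pattern similarity over time, comparing the
% topography at each time point against the template.
cosOverTime = (template' * data) ./ (norm(template) * sqrt(sum(data.^2, 1)));
```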
This toolbox was developed by Dr. Xing Tian, Dr. David Poeppel, and Dr. David E. Huber. It requires the MATLAB (The MathWorks, Inc.) environment and supports standard data formats imported from EEGLAB as well as user-defined datasets. Please cite the following references if you use TopoToolbox in publications or public releases:
Tian, X., & Huber, D. (2008). Measures of spatial similarity and response magnitude in MEG and scalp EEG. Brain Topography, 20(3), 131-141.
Tian, X., Poeppel, D., & Huber, D.E. (2011). TopoToolbox: Using sensor topography to calculate psychologically meaningful measures from event-related EEG/MEG. Computational Intelligence and Neuroscience, 2011. doi:10.1155/2011/674605
Should you have any questions or comments regarding this toolbox, please contact me at email@example.com