Broadly speaking, my research is in the field of cognitive neuroscience, in which the major task is to investigate the neural bases of (human) cognition. It includes at least two major directions of research – neural representation and neural computation. Neural representation concerns how information is represented in our brain; neural computation concerns how the neural representation is formed and manipulated. I believe there are generic kinds of neural computation, called canonical neural computations, which can be applied to information processing in different modalities and cognitive domains. In my specific research, I use speech and language as a model to investigate the canonical neural computations underlying human cognition. Two kinds of canonical neural computation are of particular interest:
Transformation:
Variables or representations change over time and/or across functional brain regions. My focus is the motor-to-sensory transformation and its functionality in (speech and language) behavior, including its putative computational function in the following research topics:
Integration:
Combining information across time, locations, or modalities usually creates new representations. My research in this direction focuses on temporal integration (mostly with my collaborators) and multi-modal integration:
Temporal integration (collaborating with Nai Ding, Xiangbin Teng, and David Poeppel): Similar kinds of information are summarized over time (e.g., sounds, written words, and the linguistic processes that follow them). One theory proposes that neural oscillations may serve an integrative function during speech perception. How do neural oscillations mediate this integrative process, as well as speech perception and language comprehension? How do the proposed neural oscillations relate to memory (cf. the 2013 Current Biology and 2015 Scientific Reports studies), and to the integration and comparison of preceding and following information (e.g., retrieval), as well as later evaluation processes (e.g., the N400)?
Multi-modal integration: Information from different sources is combined. The most common kind of multi-modal integration is multisensory integration (e.g., visual-auditory integration – looking at a person's face while listening to speech). I am particularly interested in how motor and other cognitive functions integrate, and in the function of such integration. This multi-modal integration differs from common multisensory integration in that it does not require information to come from external sources. Instead, the information can be induced internally, such as via the motor-to-sensory transformation. For example,
1) How does the motor-to-sensory transformation integrate with external feedback for speech control? What would the computation be when they are (temporally, spectrally, or spatially) inconsistent?
2) How are the sense of agency and the function of self-monitoring obtained by integrating the motor-to-sensory transformation and sensory feedback?
3) Can motor-to-sensory transformation integrate with other information to create new (episodic and semantic) memory?
I am also interested in cross-disciplinary research, such as between neuroscience and computer science. By collaborating with faculty in computer science (Zheng Zhang and Xipeng Qiu), we aim to build a bridge between artificial intelligence (AI) and human intelligence in the areas of speech and language. For example, we investigate the commonality between natural language processing and the neural bases of language processing, especially in the aspect of semantics. Moreover, we would like to use recurrent neural network models to test neuroscience theories and models, such as lateralization and the motor-to-sensory transformation. The purpose of this endeavor is to advance both fields – borrowing methods and models from computer science to understand our brain better, while lending neuroscience findings and models to make AI smarter and more human-like.
Tian, X., Ding, N., Teng, X., Bai, F., & Poeppel, D. (in press). Imagined speech influences perceived loudness of sound. Nature Human Behaviour.
Multiple postdoctoral positions are available to investigate the neural mechanisms of speech production and perception, with a focus on sensorimotor integration and its contribution to representation and computation in speech, language, memory, and other higher-order cognition. Information on applying is available at http://shanghai.nyu.edu/about/work/fellowships. Job inquiries can be sent to xing.tian@nyu.edu.
Students interested in cognitive neuroscience have two ways to join our lab: via the NYU Neuroscience Doctoral Program Shanghai Track or the ECNU Brain and Cognitive Science Graduate Program Track. Detailed information can be found at http://neuro.shanghai.nyu.edu/graduate. Prospective students are encouraged to contact me at xing.tian@nyu.edu.
Our lab is open to motivated undergraduates interested in conducting independent research on human cognitive functions, including speech, language, memory, and other higher-order cognitive functions. Please send your CV, a description of your research interests, and a copy of your unofficial transcript to xing.tian@nyu.edu.
Click here to download the ZIP file that contains the main functions, a detailed tutorial, manual, and sample data. The tutorial and manual can also be downloaded separately: tutorial, manual.
TopoToolbox is open-source software for topographic analysis of event-related electrophysiological (EEG/MEG) data, based on the methods proposed by Tian and Huber (2008; 2011). TopoToolbox provides tools for researchers to directly derive robust measures of response-pattern (topographic) similarity and psychologically meaningful response magnitude from electromagnetic signals in sensor space. These measures are useful for testing psychological theories without requiring anatomical descriptions. Three functions are provided in this toolbox:
Angle test: testing topographic similarity between experimental conditions
Projection test: normalizing individual differences against a template to measure response magnitude
Angle dynamics test: assessing pattern similarity over time.
This toolbox was developed by Dr. Xing Tian, Dr. David Poeppel, and Dr. David E. Huber. It requires the MATLAB (The MathWorks, Inc.) environment and supports various standard data formats imported from EEGLAB as well as user-defined datasets.
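For intuition, the core computations behind these three tests can be sketched in a few lines of MATLAB. This is a minimal sketch of the logic as I read it from Tian and Huber (2008), not the toolbox's actual interface; all variable names (topoA, topoB, template, nSensors) are hypothetical placeholders.

% Minimal sketch (assumed from Tian & Huber, 2008; not the toolbox's API).
% Each topography is a 1 x nSensors vector of event-related EEG/MEG
% amplitudes averaged over a time window of interest.
nSensors = 157;                    % e.g., a 157-channel MEG system
topoA    = randn(1, nSensors);     % condition A topography (placeholder data)
topoB    = randn(1, nSensors);     % condition B topography (placeholder data)
template = randn(1, nSensors);     % template topography (e.g., a grand average)

% Angle test: the cosine of the angle between two sensor-space vectors
% indexes topographic (pattern) similarity, independent of overall magnitude.
cosAngle = dot(topoA, topoB) / (norm(topoA) * norm(topoB));

% Projection test: projecting a topography onto the normalized template
% yields a scalar response magnitude in which individual differences in
% topographic pattern are normalized against the template.
magnitudeA = dot(topoA, template) / norm(template);

% Angle dynamics test: computing the angle measure at every time point,
% rather than on a time-averaged topography, tracks pattern similarity
% over time.

The toolbox wraps these computations in functions that handle data imported from EEGLAB and user-defined datasets; see the tutorial and manual above for the actual interface.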
Please cite the following references if you use the TopoToolbox for publications or public releases:
Tian, X., & Huber, D. E. (2008). Measures of spatial similarity and response magnitude in MEG and scalp EEG. Brain Topography, 20(3), 131-141.
Tian, X., Poeppel, D., & Huber, D. E. (2011). TopoToolbox: Using sensor topography to calculate psychologically meaningful measures from event-related EEG/MEG. Computational Intelligence and Neuroscience, 2011. doi:10.1155/2011/674605
Should you have any questions or comments regarding this toolbox, please contact me at xing.tian@nyu.edu.
Prospective students, postdoctoral researchers, and collaborators are welcome to contact us directly.
Xing Tian (田兴) E-mail: xing.tian@nyu.edu