Imaging & Visualization Research Group
We conduct theoretical and applied research in Computer Vision, Computational Biology and Neuroscience, Medical Image Computing, Machine Learning, and Imaging for Forensics. Our goal is to develop novel, automated and user-guided computational methods that provide robustness, accuracy, and computational efficiency in the analysis of visual data.
We work toward solutions to existing problems and explore scientific disciplines where our research can contribute useful interpretation, quantification, and modeling.
Medical Images │ GIS │ Digital Forensics │ Transportation
VITAL (Visual Information Translation Analysis & Learning): Our focus is on handling data uncertainties in classification, modeling, and prediction from image data. Our hypothesis is that mathematical and computational methods are especially helpful with ambiguous, outlier-ridden, and incomplete data, and can ultimately help create new hypotheses and directions in different domains. While keeping our core theoretical background in Computer Vision and Pattern Recognition, the application domains of our interest are robotics, (pre-clinical) computational neuroscience and physiology, and (clinical) biomedical imaging.
Modeling the structure and dynamics of neuronal circuits at single neuron resolution: The overarching hypothesis of this research is that the brain is a highly adaptive system defined by specific structure and dynamics, both as a whole and at the single-cell level. Although this is a fundamental hypothesis, it has been difficult to test in live animals, quantitatively through numerical modeling, or even qualitatively through observation. By bringing together leading-edge imaging technologies and computationally intensive image analytics, we have initiated the pursuit of what makes the brain function as a whole throughout life and continuously adapt to changes such as aging, disease, drug treatment, and injury. We aim to explain how synaptic connectivity is established in vivo, an important open question in Neuroscience. Our immediate goal in this direction is the spatiotemporal reconstruction of the larval Drosophila Central Nervous System at single-cell resolution from specialized imagery. The uniqueness of our approach, and its main difference from existing efforts, is the bottom-up reconstruction of neuronal circuits and their dynamics, from single-neuron modeling to graph-based annotation of connectivity maps.
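A graph-based connectivity map of the kind described above can be sketched minimally as a weighted directed graph: nodes are individual neurons, and each edge records how many synaptic contacts one neuron makes onto another. The neuron names and contact counts below are hypothetical, purely for illustration:

```python
# Minimal sketch of a graph-based connectivity map (hypothetical data):
# nodes are individual neurons, directed edges carry synaptic contact counts.
connectome = {
    "A1": {"B2": 5, "C3": 2},   # neuron A1 synapses onto B2 (5 contacts) and C3 (2)
    "B2": {"C3": 7},
    "C3": {"A1": 1},
}

def out_degree(graph, neuron):
    """Total number of synaptic contacts made by `neuron`."""
    return sum(graph.get(neuron, {}).values())

def strongest_partner(graph, neuron):
    """Downstream neuron receiving the most contacts from `neuron`."""
    targets = graph.get(neuron, {})
    return max(targets, key=targets.get) if targets else None

print(out_degree(connectome, "A1"))         # 7
print(strongest_partner(connectome, "B2"))  # C3
```

Annotation queries such as these (total output strength, dominant downstream partner) are the kind of per-neuron statistics a bottom-up reconstruction makes directly computable.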
Discovering and quantifying protein interaction networks within single neurons: A key to vitalizing knowledge of the proteome is systematic methods that link individual protein interactions to specific cellular outputs. Our team participates in a cross-disciplinary effort to build sensor systems that directly quantify and visualize interactions between proteins within living animals. Our work so far suggests that restricted protein interactions play a decisive role in developmental processes. At a more global level, we aim to map protein interaction networks and associate them with the structural development of neuronal morphologies.
Visual Analytics of Neuroimaging Data: This is a collaborative project with Indiana University's Department of Radiology and Imaging Sciences. We aim to develop analysis and visualization techniques and tools for the visual exploration of human brain image data. Pattern recognition techniques are applied to detect imaging biomarkers for various conditions. Visualization techniques are developed to display the brain connectome network's topology, attributes, clusters, markers, genetic associations, and their correlations, within the context of volumetric anatomical features. Visual analytics techniques are also being developed for analysis tasks such as diagnostic biomarker detection. This project is currently funded by NIH-NIBIB and by IUPUI's Imaging Technology Development Program (ITDP).
Health Care Data Visualization: This is a collaboration with researchers at the Regenstrief Institute and the School of Informatics to develop new visualization techniques and an interactive visualization system for large healthcare data sets. Such a system offers a real-time, web-based solution for the effective use of large-scale electronic health record systems by integrating human visual capabilities into the overall health-data-driven decision-making process at the system level. We developed a novel concept-space approach that compresses large, heterogeneous, historical patient and public health data into a single, intuitive, and comprehensive visualization. New spatiotemporal visualization techniques were developed for large public health datasets that involve geographical and population-wide information. This project has been funded by the US Department of Defense (US Army).
Information Visualization Algorithms: We are interested in developing general-purpose information visualization algorithms. Examples include (1) Gene Terrain, a large-scale graph visualization technique based on scattered data interpolation; (2) Spiral Theme Plot, a time-series data visualization technique; and (3) Color Time Curves, a spatiotemporal data visualization technique. These techniques have been applied to data analytics and visualization applications such as disease biomarker detection using disease networks and protein-protein interaction networks, healthcare data visualization, city traffic data visualization, and text visualization for online review data and unstructured text data.
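The scattered-data-interpolation idea behind a terrain-style graph visualization can be sketched in a few lines. This is an illustrative approximation, not the published Gene Terrain algorithm, and the node positions and weights are made up: each node contributes a Gaussian bump at its layout position, and the summed surface forms the "terrain" a renderer would color-map:

```python
import math

# Hypothetical node layout: (x, y, weight) for each graph node.
nodes = [(0.2, 0.3, 1.0), (0.7, 0.6, -0.5), (0.5, 0.9, 0.8)]

def terrain_height(x, y, nodes, sigma=0.15):
    """Sum of Gaussian kernels centered at each node (scattered data interpolation)."""
    h = 0.0
    for nx, ny, w in nodes:
        d2 = (x - nx) ** 2 + (y - ny) ** 2
        h += w * math.exp(-d2 / (2 * sigma ** 2))
    return h

# Sample the surface on a coarse grid; a renderer would color-map these
# heights so that peaks and valleys reveal clusters of high/low-weight nodes.
grid = [[terrain_height(i / 9, j / 9, nodes) for i in range(10)] for j in range(10)]
```

Peaks in the surface mark clusters of strongly weighted nodes, which is what makes this representation useful for spotting structure in large graphs at a glance.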
3D Facial Image Analysis for FAS Diagnosis: This was a collaboration with the NIH Collaborative Initiative on Fetal Alcohol Spectrum Disorders (CIFASD). We developed 3D image analysis techniques for Fetal Alcohol Syndrome diagnosis, focusing on enhancing our understanding of FASD dysmorphology through the processing and analysis of 3D facial images. We also developed mouse models of facial and brain phenotypes as a function of the dose and embryonic stage of alcohol exposure. New applications of 3D micro-video imaging and micro-computed tomography (Micro-CT) imaging of the face and underlying bone/cartilage allow high-resolution analysis of surface-to-bone/cartilage craniofacial dysmorphology from fetal ages to young adulthood. This project was funded by several NIH grants.
Volume Graphics: This research focused on volume rendering algorithms and volume graphics techniques for interactive volumetric modeling systems. We developed several algorithms for deformable volume rendering and transfer function design in volume visualization, as well as a framework of hardware-assisted techniques and voxelization algorithms that allow 3D modeling operations to be carried out interactively in a volume graphics environment. Our results demonstrate that high-performance volume graphics and volume modeling are achievable with the architecture of existing graphics subsystems, without any special hardware design. This project was funded by an NSF grant.
Intelligent Vehicles: This research on autonomous and safe driving develops computer vision algorithms for guiding intelligent vehicles on the road without collision. It uses image, video, and sensing technologies to support safe driving, including functions such as the detection of road edges, lane marks, pedestrians, and bicyclists, and the estimation of time-to-collision with other vehicles from vehicle-borne cameras. The results predict potential dangers and improve driving safety. We develop motion-based approaches to these goals using pattern recognition and data mining on large-scale naturalistic driving video taken in different weather and illumination conditions. We also take a data-driven approach, mapping driving video and vehicle sensor data into a high-dimensional feature space and applying machine learning methods, to build knowledge of the driving environment into the AI functions of intelligent vehicles. We are also interested in building a driving interface that makes drivers aware of surrounding traffic, monitoring traffic with network cameras in road infrastructure, large-scale visual surveys of road environments, and driving information sharing via V2V and V2I communication.
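One of the simplest motion-based cues mentioned above, time-to-collision, can be estimated from a single camera using the scale change of a tracked object between frames: since apparent size is inversely proportional to distance, TTC ≈ Δt / (w₂/w₁ − 1) under a constant-closing-speed assumption. A minimal sketch with made-up bounding-box widths:

```python
def time_to_collision(w_prev, w_curr, dt):
    """Monocular TTC estimate from the apparent-size change of a tracked object.

    Assumes constant closing speed; apparent width is inversely proportional
    to distance, giving TTC = dt / (w_curr / w_prev - 1).
    Returns infinity when the object is not getting closer.
    """
    if w_curr <= w_prev:
        return float("inf")
    return dt / (w_curr / w_prev - 1.0)

# Hypothetical bounding-box widths (pixels) from two frames 0.1 s apart:
ttc = time_to_collision(100.0, 105.0, 0.1)
print(round(ttc, 2))  # 2.0  -> the gap would close in about 2 seconds
```

In practice the widths would come from a detector or tracker running on the vehicle-borne camera, and the raw estimate would be smoothed over several frames before triggering a warning.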