Voxel-based volumetric objects typically have much more complex physical properties and spatial structures than surface-based geometric models. As a result, traditional physically-based deformation modeling techniques are often ineffective and costly in the volumetric domain. This project focuses on developing efficient and accurate modeling techniques, computational methods, and visualization algorithms for deformable volumetric objects. A new landmark-based volume deformation model and related rendering algorithms have been developed. In this model, a volume deformation is represented by a relatively small set of landmark points defined in a volume space. Three-dimensional scattered data interpolation methods are applied to generate smooth deformation functions over the volume space. Landmark points and their movements can be used both for interactive deformation manipulation and for representing the deformation patterns of specific types of deformable objects. Instead of directly modeling the object's physical or biological properties, this approach employs an inverse method that derives the deformation pattern of a given class of objects, using a parameterized spring model, directly from existing or experimental data. Fast deformable volume rendering algorithms are developed to support visualization of the volume deformation process. This deformation model has been used for computer-assisted growth and morphological study in craniofacial surgical planning, in collaboration with Prof. Joan Richtsmeier from The Johns Hopkins University Medical School. This project is currently supported by the National Science Foundation.
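The core idea of interpolating a small set of landmark displacements into a smooth deformation field can be sketched as follows. This is a minimal illustration using inverse-distance-weighted (Shepard) interpolation; the project's actual 3D scattered data interpolants may differ (radial basis functions are another common choice), and all names here are illustrative.

```python
# Sketch: landmark-based deformation via inverse-distance-weighted
# (Shepard) scattered-data interpolation. The weighting scheme is an
# illustrative assumption, not necessarily the project's interpolant.

def deform_point(p, landmarks, displacements, power=2.0, eps=1e-9):
    """Displace point p by a weighted blend of landmark displacements."""
    weights = []
    for idx, lm in enumerate(landmarks):
        d2 = sum((a - b) ** 2 for a, b in zip(p, lm))
        if d2 < eps:  # p coincides with a landmark: move exactly with it
            return tuple(a + b for a, b in zip(p, displacements[idx]))
        weights.append(d2 ** (-power / 2.0))
    total = sum(weights)
    disp = [0.0, 0.0, 0.0]
    for w, dv in zip(weights, displacements):
        for i in range(3):
            disp[i] += (w / total) * dv[i]
    return tuple(a + b for a, b in zip(p, disp))

# Two landmarks: one pulled +1 in x, one held fixed.
landmarks = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
displacements = [(1.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
print(deform_point((0.0, 0.0, 0.0), landmarks, displacements))  # → (1.0, 0.0, 0.0)
print(deform_point((5.0, 0.0, 0.0), landmarks, displacements))  # → (5.5, 0.0, 0.0)
```

Applying the same function to every voxel position yields a smooth deformation over the whole volume, with points near a moved landmark following it closely and distant points barely affected.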
Some Images and animation clips
This is a multi-disciplinary effort, in collaboration with several biomedical researchers at the Indiana University Medical Center. Our goal is to develop innovative 3D techniques and applications to efficiently process and analyze biomedical data collected from clinical and biological structures, and to combine these data with computer models, using computer graphics and simulation techniques, for biomedical research, medical diagnosis, and treatment. My current collaborators include Dr. Mihran Tuceryan of the Department of Computer and Information Science, Prof. Ken Dunn and Prof. Robert Bacallao of the Department of Medicine, Prof. Mark Lowe of the Department of Radiology, and Prof. Mostafa Analoui of the School of Dentistry.
A project in 3D microscopy visualization has been pursued for several years, funded by a major research grant from the Indiana University Strategic Directions Initiative (SDI). In this effort, we employ an integrated approach that combines volume visualization, 3D image processing, and confocal microscopy in one tightly integrated system, providing microscopy visualization and imaging solutions that no single technology can offer on its own. New algorithms and tools are being developed not only for visualization but also for quantification and analysis. A visualization software system, called IVIE, has been developed and installed in Department of Medicine labs. The system is currently in the beta-testing phase and will be released to the public soon. Several other biomedical application projects are also being developed as part of this effort, including craniofacial surgery planning and simulation; interactive modeling and simulation for oral and maxillofacial imaging; and 3D structural analysis of hard tissue using microCT.
For more information on the 3D microscopy project, visit the project's web page
IVIE: a prototype software system for Interactive Visualization and Imaging.
Some more sample images are available HERE
Traditional computer graphics systems use the boundary surfaces of 3D objects for representation and rendering. Volume graphics instead employs the cells of an object's interior space as the basic representation and rendering unit, and is considered by many researchers to be the next-generation computer graphics paradigm. A distinguishing characteristic of our research, compared with other groups, is its emphasis on interactive modeling requirements. The goal is to provide volume graphics support for interactive volumetric modeling systems. We have recently developed several new and very encouraging techniques and algorithms that allow 3D modeling operations to be carried out interactively in a volume graphics environment. A key component is the hardware-accelerated voxelization of geometric and volumetric models. Our results demonstrate that it is possible to achieve high-performance volume graphics and volume modeling with the architecture of existing graphics subsystems, without any special hardware design.
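To make the voxelization step concrete, the sketch below fills a binary voxel grid one z-slice at a time by sampling an inside/outside test at each voxel center. The hardware-accelerated version renders the model slice by slice on the graphics subsystem instead; the sphere solid and grid size here are illustrative assumptions for a CPU-only demonstration.

```python
# Minimal CPU sketch of voxelizing a solid into a binary voxel grid.
# A sphere serves as the test solid; the real system voxelizes
# geometric and volumetric models using graphics hardware.

def voxelize(inside, n):
    """Sample an inside(x, y, z) predicate at n^3 voxel centers in [0,1]^3."""
    grid = [[[0] * n for _ in range(n)] for _ in range(n)]
    for k in range(n):           # one z-slice at a time, mirroring the
        z = (k + 0.5) / n        # slice-by-slice hardware approach
        for j in range(n):
            y = (j + 0.5) / n
            for i in range(n):
                x = (i + 0.5) / n
                grid[k][j][i] = 1 if inside(x, y, z) else 0
    return grid

sphere = lambda x, y, z: (x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2 <= 0.25 ** 2
vox = voxelize(sphere, 16)
print(sum(v for s in vox for row in s for v in row), "of", 16 ** 3, "voxels set")
```

The slice loop is exactly the structure the hardware approach exploits: each 2D slice can be produced by rendering the model clipped to that slab and reading the result back as one plane of the volume.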
These voxelization algorithms can be further integrated into a more systematic volume fusion system that provides a uniform framework for the interactive modeling and rendering of volumetric scenes. In this approach, a volumetric scene is defined by a scene expression that constructs volume information by combining multiple 3D objects of heterogeneous representations using various blending and filtering functions. The process of designing and evaluating a volumetric scene expression is called volume fusion. The design process involves selecting appropriate blending and filtering functions to represent the desired computational tasks of different applications. A space-sweep approach provides a general computational framework that computes the volume information of the scene expression on a 2D slice, which moves across the volume space in regular increments. We have applied this framework to several specific problems, including volumetric CSG modeling, binary CSG voxelization, and volumetric collision detection. A major advantage of this approach is that most of the computation can be implemented with hardware features of today's 3D graphics systems, leading to interactive speed for many previously very expensive applications.
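The space-sweep evaluation of a scene expression can be sketched as a per-slice blend of the input volumes. The max/min blends shown here are standard choices for CSG union and intersection of density fields; the actual system's blending and filtering operators may differ, and the tiny volumes are illustrative.

```python
# Sketch: slice-based volume fusion. A blending function combines
# per-voxel values of two volumes, evaluated one 2D slice at a time
# as the sweep plane moves through the volume space.

def fuse_slices(vol_a, vol_b, blend):
    """Evaluate blend(a, b) per voxel, one z-slice at a time."""
    out = []
    for slice_a, slice_b in zip(vol_a, vol_b):   # sweep over z
        out.append([[blend(a, b) for a, b in zip(ra, rb)]
                    for ra, rb in zip(slice_a, slice_b)])
    return out

union = lambda a, b: max(a, b)          # CSG union of density fields
intersect = lambda a, b: min(a, b)      # CSG intersection

A = [[[1, 0], [0, 0]], [[0, 0], [0, 0]]]   # tiny 2x2x2 binary volumes
B = [[[1, 1], [0, 0]], [[0, 0], [0, 1]]]
print(fuse_slices(A, B, union))      # → [[[1, 1], [0, 0]], [[0, 0], [0, 1]]]
print(fuse_slices(A, B, intersect))  # → [[[1, 0], [0, 0]], [[0, 0], [0, 0]]]
```

Because each output slice depends only on the corresponding input slices, the per-slice blend maps directly onto the blending stages of a graphics pipeline, which is what makes the hardware-accelerated evaluation possible.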
Another major application we have recently begun to pursue is geospatial data visualization using volume fusion, in collaboration with the Center for Earth and Environmental Science (CEES) at IUPUI. We aim to develop volume fusion techniques that achieve multi-scale volume rendering, geospatial data fusion, and scalability, and to provide a new mechanism for producing and distributing multidimensional, multi-scale geospatial visualizations in a fast, accurate, and inclusive manner. In addition to graphical representations, the technology will enhance the ability to extract tabular databases of space/time and statistical dimension measurements, allowing accurate and efficient input for the analysis of multidimensional geospatial datasets.
Some sample voxelization results
This project investigates a new way of visualizing large scientific datasets using high-performance networks and a data-distributed visualization paradigm. Large dataset visualization is considered one of the most challenging problems in visualization today. The emergence of high-performance networks, such as the Abilene network and Internet2, offers potentially very powerful solutions and new insight into this problem and related applications. The main objective of this project is to develop new visualization algorithms and applications that dynamically and efficiently access and process different pieces of the required data over high-speed networks, through either remote disk access or direct data-stream connections to data servers. Since local copies of the datasets are no longer needed, scalability can be achieved more easily through demand-driven data retrieval and efficient dataset decomposition. New visualization algorithms will be developed to accommodate the required data-retrieval paradigm. The success of this project will not only lead to new platform-independent visualization applications that depend less on local memory resources, but also significantly reduce the unnecessary data redundancy generated by the large number of local copies of popular datasets. The high level of network transparency and data sharing also makes this an ideal environment for collaborative applications. This project is currently funded by the Indiana University High Performance Network Applications Program.
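The demand-driven retrieval idea can be sketched as a blocked volume that fetches only the blocks the visualization actually touches and caches them locally, rather than holding a full copy of the dataset. The fetch stub below stands in for a remote disk access or data-server stream; the block naming and unbounded cache are illustrative assumptions, not the project's actual protocol.

```python
# Sketch: demand-driven block retrieval with caching. The dataset is
# decomposed into blocks; each block is fetched over the network only
# on first use, so no full local copy of the dataset is needed.

class BlockedVolume:
    def __init__(self, fetch_block):
        self.fetch_block = fetch_block   # callable: block id -> data
        self.cache = {}
        self.fetches = 0                 # counts remote accesses

    def block(self, bid):
        if bid not in self.cache:        # fetch once, then reuse
            self.cache[bid] = self.fetch_block(bid)
            self.fetches += 1
        return self.cache[bid]

# Stand-in "server": each block is just a small list of values here.
server = lambda bid: [bid] * 4
vol = BlockedVolume(server)
vol.block((0, 0, 0)); vol.block((0, 0, 1)); vol.block((0, 0, 0))
print(vol.fetches)   # → 2 : the repeated request came from the cache
```

A visualization algorithm written against this interface naturally becomes demand-driven: it requests only the blocks intersecting the current view or region of interest, and the decomposition granularity controls the trade-off between network round trips and transfer volume.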
Another paradigm we are investigating is the use of volume fusion for tele-visualization and for faster, more efficient data sharing and dissemination. The volume fusion process generates exactly the portion of the data in the desired area and at the desired resolution, thus providing a uniform and compact way to communicate and visualize data over high-performance networks.
This project aims to develop immersive techniques and systems for 3D volume data exploration. The system provides an immersive 3D interface for interactive data manipulation and visualization, with interaction techniques developed for various levels of user immersion. Haptic techniques with force feedback are also being investigated. CAVE and ImmersaDesk VR systems are used. A preliminary system, called 3DIVE, has been developed in association with the IU Advanced Visualization Laboratory (AVL). 3DIVE employs a volumetric region- and object-based method, and provides interactive operations in a VR environment including transformation, slicing, image processing, region-of-interest extraction, transfer function design and editing, and multiple-dataset handling. A collaborative module is also being developed. 3DIVE currently accepts 8-bit intensity data or 32-bit RGBA data. These data sets can be collected or generated from a variety of sources, such as MRI, CT, confocal microscopy, and "in-house" volume graphics techniques. Equipment and personnel for this project are currently supported by the Indiana University Advanced Visualization Lab (AVL).
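As an illustration of the transfer function editing mentioned above, the sketch below maps 8-bit intensity values (one of the two data formats 3DIVE accepts) to 32-bit RGBA. The grayscale ramp with an opacity threshold is an illustrative assumption, not 3DIVE's actual transfer function.

```python
# Sketch: a simple transfer function mapping 8-bit intensity data to
# 32-bit RGBA. Low-intensity voxels are made fully transparent so that
# structures of interest stand out in the rendered volume.

def apply_transfer(intensities, threshold=64):
    """Map each 0-255 intensity to an (r, g, b, a) tuple."""
    rgba = []
    for v in intensities:
        alpha = v if v >= threshold else 0   # hide low-intensity voxels
        rgba.append((v, v, v, alpha))        # grayscale ramp
    return rgba

print(apply_transfer([0, 100, 255]))
# → [(0, 0, 0, 0), (100, 100, 100, 100), (255, 255, 255, 255)]
```

Interactive transfer function editing amounts to letting the user reshape this intensity-to-color/opacity mapping (thresholds, ramps, color tables) and re-rendering the volume as the mapping changes.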
3DIVE : a prototype software system for 3D immersive and interactive volume object manipulation and visualization.