Collaborative Remote Visualization
Introduction
In the last few years, scientists and researchers have given a great deal of attention to remote visualization of scientific datasets within collaborative environments. This interest has been fueled by the use of interactive viewing as the primary means by which researchers explore large datasets. However, researchers often need to extend this interactivity in order to collaborate with colleagues who are geographically separated. Most current remote visualization tools allow multiple parties to view images from different locations, but pose problems with efficiency and user interactivity.

First Year ESP Recipients Walk at Graduation
Established in 1999 by Chris Johnson and the College of Engineering, the Engineering Scholars Program (ESP) is a scholarship opportunity for incoming freshmen interested in engineering and computer science. The goal of the ESP is to leverage the exciting engineering-based research at the University by exposing first-year students to actual research activities. Adding a research component to the first-year experience allows students to see beyond the horizon offered by prerequisite courses. By giving first-year students a more in-depth look into engineering, we aim to increase their enthusiasm, with the result being significantly improved retention rates. Higher retention rates translate into a larger and better-trained graduating class of much-needed engineers and computer scientists.
These students are the best and brightest, with an average GPA of 3.96 and an average ACT score of 31. Now in its fourth year, the program has grown from nine scholarship recipients in 1999 to fifteen in 2002.
Star-Ray Interactive Ray-Tracer Debuts at SIGGRAPH 2002
(left to right) Living Room Scene, Graphics Museum, Science Room, Galaxy Room, Atlantis Scene
Parametric Method for Correction of Intensity Inhomogeneity in MRI Data
Intensity inhomogeneity is one of the main obstacles to MRI data post-processing. The problem requires retrospective correction because the inhomogeneity depends strongly on patient anatomy and the acquisition protocol. We have developed a new method for correcting the inhomogeneities using a polynomial estimate of the bias field. The method minimizes a composite energy function to find the parameters of the polynomial model. The energy function is designed to provide a robust estimate of the bias field by combining measures from histogram analysis and local gradient estimation. The method was validated on a wide range of MRI data obtained with coils of different types and under different acquisition protocols.
The developed method provides reliable estimation of intensity inhomogeneities in MRI data. Correction times depend on the number of parameters in the model, the dataset size, and the degree of subsampling used in estimating both local and global terms; they range from 1 to 5 minutes on a mid-range PC.
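To make the idea concrete, the sketch below fits a smooth polynomial bias field to a 2D image by ordinary least squares in log space. This is a deliberate simplification: the actual method minimizes a composite energy combining histogram and local-gradient measures, not a plain least-squares fit, and the function name and parameters here are illustrative only.

```python
import numpy as np

def estimate_bias_field(image, degree=2):
    """Estimate a smooth multiplicative bias field by fitting a 2D
    polynomial to the log-intensities with least squares.

    Simplified illustration: the method described above minimizes a
    composite energy (histogram + local gradient terms) instead of
    this plain least-squares criterion.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalize coordinates to [-1, 1] for numerical stability.
    xs = 2.0 * xs / (w - 1) - 1.0
    ys = 2.0 * ys / (h - 1) - 1.0
    # Polynomial basis x^i * y^j with i + j <= degree.
    basis = [(xs ** i) * (ys ** j)
             for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    A = np.stack([b.ravel() for b in basis], axis=1)
    # Work in log space so the multiplicative bias becomes additive.
    log_img = np.log(image.ravel() + 1e-6)
    coeffs, *_ = np.linalg.lstsq(A, log_img, rcond=None)
    return np.exp(A @ coeffs).reshape(h, w)

# Correction divides out the estimated field:
# corrected = image / estimate_bias_field(image)
```

Working in log space is the standard trick for multiplicative bias models: the product of tissue intensity and bias field becomes a sum, which a linear fit can separate.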
Segmenting Tomographic Data
Traditionally, processing tomographic data begins with reconstructing volumes. However, when the tomographic data is incomplete, noisy, or misregistered, tomographic reconstruction can produce artifacts in the volume, which make subsequent segmentation and visualization more difficult. Researchers in the SCI Institute are developing direct methods for segmenting tomographic data. The strategy is to fit 3D surface models directly to the tomographic projections, rather than to the volume reconstructions. In this way, the surface fitting is not influenced by reconstruction artifacts. Implementing this strategy requires several technical advances. The first is a mathematical formulation that relates object shape directly to tomographic projections, yielding a description of how surfaces should deform in order to match the tomographic data. The second is a surface modeling technology that can accommodate a wide variety of shapes and support incremental deformations. This is done using 3D level-set models, which results in a 3D partial differential equation (PDE). The final advance is the development of computational schemes that solve these PDEs efficiently. For this we have developed the incremental projection method, which significantly reduces the amount of computation needed to deform these 3D surface models.
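The level-set idea above can be sketched in a few lines: the surface is the zero set of a function phi, and deforming the surface means evolving phi under a speed term. The 2D toy update below is only an illustration; in the actual method the speed is derived from the tomographic projections (via the incremental projection method), whereas here it is simply passed in as an array.

```python
import numpy as np

def level_set_step(phi, speed, dt=0.1):
    """One explicit Euler update of a 2D level-set function:
    phi_t = speed * |grad phi|.

    Illustration only: the method described above derives `speed`
    from the tomographic projections; here it is supplied directly.
    Central differences are used for simplicity (production level-set
    codes use upwind schemes for stability).
    """
    gy, gx = np.gradient(phi)          # finite-difference gradient
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    return phi + dt * speed * grad_mag
```

The surface itself never needs to be meshed during the evolution; extracting the zero level set of phi at the end recovers the segmented boundary, which is what lets level-set models handle large shape changes and topology changes gracefully.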
Immersive Visualization: A Research Update
What is Immersive Visualization?
The goal of visualization is to aid in the understanding of complex scientific data, typically using techniques from the fields of computer graphics and animation. To gain additional insight, immersive visualization places the user directly within the data space through virtual reality technology. The resulting immersive experience allows exploration from a first-person perspective, in contrast to the third-person interaction typical of desktop environments.

The feeling of actually "being there" in a virtual environment, also known as presence, is created by fooling one or more of the user's senses with computer-generated cues. In a typical system, stereo images provide a sense of visual depth, and natural interaction is achieved with tracking sensors and 3D input devices. More advanced systems may also include force feedback, spatialized audio, and/or voice recognition capabilities to increase the sense of presence.
Imagine directly navigating through a scientific dataset, much like one experiences the real world. How would it feel to investigate the interesting features of the data with your sense of touch? Would this capability be useful? These are the types of questions Virtual Reality (VR) researchers at the SCI Institute currently seek to answer.
The Simian Project
SCIRun/BioPSE: Problem Solving Environments for the Next Generation of Scientific Computing
"I am excited about the SCIRun and BioPSE software releases," says the Institute's director Christopher R. Johnson. "We have been working on SCIRun since 1992. What started out as software designed by a few people (Steve Parker with help from David Weinstein) has grown into a substantial software effort with more than 50 contributors." Johnson continues, "BioPSE has been a focused software effort since 1999. I am thrilled to see SCIRun and BioPSE released and look forward to seeing how scientists and engineers use these software packages to solve their application domain problems."
The Science and Application of Complex Meshes
Part 1. Unstructured Meshes in Entertainment and Engineering
Tomb Raider
If you have played just about any modern Nintendo(tm) or Playstation(tm) computer game, then you have encountered meshes. Many games make heavy use of what are called polygonal surface meshes: surfaces built up out of polygons. They are used to model many of the people, cars, and other 3D things within the game. Polygonal models are a good way for game designers to get what they want out of the hardware inside the computer or console you are using to play the game. I'll explain what a mesh actually is, how it is constructed, and how engineers use meshes to solve problems.
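To see what a polygonal mesh looks like in practice, here is a minimal sketch of the indexed triangle mesh layout most games and engineering codes use: a list of 3D vertex positions plus a list of faces, each face a triple of indices into the vertex list. The shape (a tetrahedron) and the helper function are illustrative choices, not taken from any particular game engine.

```python
import numpy as np

# An indexed triangle mesh: vertex positions plus faces that refer
# to vertices by index, so shared vertices are stored only once.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
faces = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
])  # a tetrahedron: four triangles sharing four vertices

def face_normals(vertices, faces):
    """Per-face unit normals via the cross product of two edge vectors.
    Normals are what lighting calculations in games are built on."""
    v0, v1, v2 = (vertices[faces[:, k]] for k in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n, axis=1, keepdims=True)
```

Storing faces as indices rather than repeated coordinates is the key design choice: it saves memory, and moving one vertex automatically moves every triangle that touches it.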
The Mathematics Behind Imaging
An interesting but very challenging kind of imaging is visualizing the interior of a non-transparent object (such as a human body) using physical fields measured outside it. This imaging is achieved through a mathematical engine known as "inverse problem solving" or "statistical optimization," one of the key research directions at the SCI Institute.
Another research direction being pursued at the SCI Institute is solving "ill-posed" imaging problems by constraining the solution with a focusing criterion. This allows us to reconstruct sharp solutions from smooth data. The process is made possible by selecting special stabilizing functions that permit sharp solutions. Applications of this technique range from bioelectric source localization using magneto- and electroencephalography data for medical imaging, to geophysical inversions with gravity and magnetic fields.
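The role of a stabilizing function can be sketched with the classic smooth case. An ill-posed system Ax = b is replaced by the minimization of ||Ax - b||^2 + alpha * ||x||^2, whose penalty term tames the instability (Tikhonov regularization). The focusing approach described above swaps this smooth stabilizer for one that favors sharp solutions; the smooth variant is shown here only because it has a closed form.

```python
import numpy as np

def tikhonov_solve(A, b, alpha=1e-2):
    """Solve the ill-posed system A x = b by minimizing
    ||A x - b||^2 + alpha * ||x||^2 (Tikhonov regularization).

    Illustration of the stabilizer idea only: the focusing method
    described above replaces the smooth ||x||^2 penalty with one
    that permits sharp solutions and requires iterative solvers.
    """
    n = A.shape[1]
    # Normal equations of the penalized least-squares problem:
    # (A^T A + alpha I) x = A^T b
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

Larger alpha trades fidelity to the data for stability of the solution; choosing the stabilizer (and its weight) is exactly where the physics and the statistics of the imaging problem enter.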