SCI Publications
2010
N. Lange, M.B. Dubray, J.E. Lee, M.P. Froimowitz, A. Froehlich, N. Adluru, B. Wright, C. Ravichandran, P.T. Fletcher, E.D. Bigler, A.L. Alexander, J.E. Lainhart.
Atypical diffusion tensor hemispheric asymmetry in autism, In Autism Research, Vol. 3, No. 6, pp. 350--358. 2010.
DOI: 10.1002/aur.162
PubMed ID: 21182212
J.A. Levine, D.J. Swenson, Z. Fu, R.S. MacLeod, R.T. Whitaker.
A Comparison of Delaunay Based Meshing Algorithms for Electrophysiological Cardiac Simulations, In Virtual Physiological Human, pp. 181--183. 2010.
A. Lex, M. Streit, E. Kruijff, D. Schmalstieg.
Caleydo: Design and Evaluation of a Visual Analysis Framework for Gene Expression Data in its Biological Context, In Proceedings of the IEEE Symposium on Pacific Visualization (PacificVis '10), pp. 57--64. 2010.
ISBN: 978-1-4244-6685-6
DOI: 10.1109/PACIFICVIS.2010.5429609
The goal of our work is to support experts in the process of hypothesis generation concerning the roles of genes in diseases. For a deeper understanding of the complex interdependencies between genes, it is important to bring gene expressions (measurements) into context with pathways. Pathways, which are models of biological processes, are available in online databases. In these databases, large networks are decomposed into small sub-graphs for better manageability. This simplification results in a loss of context, as pathways are interconnected and genes can occur in multiple instances scattered over the network. Our main goal is therefore to present all relevant information, i.e., gene expressions, the relations between expression and pathways, and the relations between multiple pathways, in a simple yet effective way. To achieve this we employ two different multiple-view approaches. Traditional multiple views are used for large datasets or highly interactive visualizations, while a 2.5D technique is employed to support seamless navigation of multiple pathways that simultaneously links to the expression of the contained genes. This approach facilitates the understanding of the interconnection of pathways and enables a non-distracting relation to gene expression data. We evaluated Caleydo with a group of users from the life science community. Users were asked to perform three tasks: pathway exploration, gene expression analysis, and information comparison with and without visual links, each conducted in four different conditions. Evaluation results show that the system can considerably improve the process of understanding the complex network of pathways and the individual effects of gene expression regulation. In particular, the quality of the available contextual information and the spatial organization were rated highly for the presented 2.5D setup.
A. Lex, M. Streit, C. Partl, K. Kashofer, D. Schmalstieg.
Comparative Analysis of Multidimensional, Quantitative Data, In IEEE Transactions on Visualization and Computer Graphics, Vol. 16, No. 6, pp. 1027--1035. 2010.
When analyzing multidimensional, quantitative data, the comparison of two or more groups of dimensions is a common task. Typical sources of such data are experiments in biology, physics or engineering, which are conducted in different configurations and use replicates to ensure statistically significant results. One common way to analyze this data is to filter it using statistical methods and then run clustering algorithms to group similar values. The clustering results can be visualized using heat maps, which show differences between groups as changes in color. However, in cases where groups of dimensions have an a priori meaning, it is not desirable to cluster all dimensions combined, since a clustering algorithm can fragment continuous blocks of records. Furthermore, identifying relevant elements in heat maps becomes more difficult as the number of dimensions increases. To aid in such situations, we have developed Matchmaker, a visualization technique that allows researchers to arbitrarily arrange and compare multiple groups of dimensions at the same time. We create separate groups of dimensions which can be clustered individually, and place them in an arrangement of heat maps reminiscent of parallel coordinates. To identify relations, we render bundled curves and ribbons between related records in different groups. We then allow interactive drill-downs using enlarged detail views of the data, which enable in-depth comparisons of clusters between groups. To reduce visual clutter, we minimize crossings between the views. This paper concludes with two case studies. The first demonstrates the value of our technique for the comparison of clustering algorithms. In the second, biologists use our system to investigate why certain strains of mice develop liver disease while others remain healthy, informally showing the efficacy of our system when analyzing multidimensional data containing distinct groups of dimensions.
J. Li, D. Xiu.
Evaluation of Failure Probability via Surrogate Models, In Journal of Computational Physics, Vol. 229, No. 23, pp. 8966--8980. 2010.
DOI: 10.1016/j.jcp.2010.08.022
Evaluation of failure probability of a given system requires sampling of the system response and can be computationally expensive. Therefore it is desirable to construct an accurate surrogate model for the system response and subsequently to sample the surrogate model. In this paper we discuss the properties of this approach. We demonstrate that the straightforward sampling of a surrogate model can lead to erroneous results, no matter how accurate the surrogate model is. We then propose a hybrid approach by sampling both the surrogate model in a “large” portion of the probability space and the original system in a “small” portion. The resulting algorithm is significantly more efficient than the traditional sampling method, and is more accurate and robust than the straightforward surrogate model approach. A rigorous convergence proof is established for the hybrid approach, and practical implementation is discussed. Numerical examples are provided to verify the theoretical findings and demonstrate the efficiency gain of the approach.
Keywords: Failure probability, Sampling, Polynomial chaos, Stochastic computation
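The hybrid strategy described in the abstract can be illustrated with a toy Monte Carlo estimator. This is not the paper's implementation: the response function, the quadratic surrogate, and the band width `margin` are hypothetical stand-ins, chosen only to show the mechanism of trusting the surrogate far from the limit state and re-evaluating the original model near it.

```python
import math
import random

def true_model(x):
    # Hypothetical "expensive" exact response; failure when it exceeds 0
    return math.exp(x) - 1.0

def surrogate(x):
    # Cheap stand-in approximation: low-order Taylor expansion of the model
    return x + 0.5 * x * x

def hybrid_failure_probability(n_samples, threshold=0.0, margin=0.2, seed=42):
    """Estimate P(g(X) > threshold), X ~ N(0,1), sampling the surrogate in
    the 'large' portion of probability space and falling back to the
    original model only in a 'small' band around the limit state."""
    rng = random.Random(seed)
    failures = 0
    expensive_calls = 0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)
        g_approx = surrogate(x)
        if abs(g_approx - threshold) > margin:
            g = g_approx           # trust the surrogate far from the limit state
        else:
            g = true_model(x)      # re-evaluate the exact model near it
            expensive_calls += 1
        if g > threshold:
            failures += 1
    return failures / n_samples, expensive_calls
```

Only the samples landing in the narrow band trigger exact-model evaluations, so the expensive-call count stays a small fraction of the total sample count.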
W. Liu, P. Zhu, J.S. Anderson, D. Yurgelun-Todd, P.T. Fletcher.
Spatial Regularization of Functional Connectivity Using High-Dimensional Markov Random Fields, In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2010), Vol. 14, pp. 363--370. 2010.
PubMed ID: 20879336
Z. Liu, C. Goodlett, G. Gerig, M. Styner.
Evaluation of DTI Property Maps as Basis of DTI Atlas Building, In SPIE Medical Imaging, Vol. 7623, 762325, February, 2010.
DOI: 10.1117/12.844911
Z. Liu, Y. Wang, G. Gerig, S. Gouttard, R. Tao, T. Fletcher, M.A. Styner.
Quality control of diffusion weighted images, In SPIE Medical Imaging, Vol. 7628, 76280J, February, 2010.
DOI: 10.1117/12.844748
Y. Livnat, P. Gesteland, J. Benuzillo, W. Pettey, D. Bolton, F. Drews, H. Kramer, M. Samare.
A Novel Workbench for Epidemic Investigation and Analysis of Search Strategies in Public Health Practice, In Proceedings of AMIA 2010 Annual Symposium, pp. 647--651. 2010.
M.A.S. Lizier, M.F. Siqueira, J.D. Daniels II, C.T. Silva, L.G. Nonato.
Template-based Remeshing for Image Decomposition, In Proceedings of the 23rd SIBGRAPI Conference on Graphics, Patterns and Images, pp. 95--102. 2010.
J. Luitjens, M. Berzins.
Improving the Performance of Uintah: A Large-Scale Adaptive Meshing Computational Framework, In Proceedings of the 24th IEEE International Parallel and Distributed Processing Symposium (IPDPS10), Atlanta, GA, pp. 1--10. 2010.
DOI: 10.1109/IPDPS.2010.5470437
Uintah is a highly parallel and adaptive multi-physics framework created by the Center for Simulation of Accidental Fires and Explosions in Utah. Uintah, which is built upon the Common Component Architecture, has facilitated the simulation of a wide variety of fluid-structure interaction problems using both adaptive structured meshes for the fluid and particles to model solids. Uintah was originally designed for, and has performed well on, about a thousand processors. The evolution of Uintah to use tens of thousands of processors has required improvements in memory usage, data structure design, load balancing algorithms and cost estimation in order to improve strong and weak scalability up to 98,304 cores for situations in which the mesh used varies adaptively and also cases in which particles that represent the solids move from mesh cell to mesh cell.
Keywords: csafe, c-safe, scirun, uintah, fires, explosions, simulation
J. Luitjens, J. Guilkey, T. Harman, B. Worthen, S.G. Parker.
Adaptive Computations in the Uintah Framework, In Advanced Computational Infrastructures for Parallel/Distributed Adaptive Applications, Ch. 1, Wiley Press, 2010.
J. Luitjens.
The Scalability of Parallel Adaptive Mesh Refinement Within Uintah, School of Computing, University of Utah, 2010.
Solutions to Partial Differential Equations (PDEs) are often computed by discretizing the domain into a collection of computational elements referred to as a mesh. This solution is an approximation with an error that decreases as the mesh spacing decreases. However, decreasing the mesh spacing also increases the computational requirements. Adaptive mesh refinement (AMR) attempts to reduce the error while limiting the increase in computational requirements by refining the mesh locally in regions of the domain that have large error while maintaining a coarse mesh in other portions of the domain. This approach often provides a solution that is as accurate as that obtained from a much larger fixed mesh simulation, thus saving on both computational time and memory. However, historically, these AMR operations often limit the overall scalability of the application.
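The local-refinement idea described above can be sketched in a few lines. This is not Uintah code; the 1D mesh, the gradient-based error indicator `err`, and the tolerance are illustrative assumptions only, showing how cells with large estimated error are split while the rest of the domain stays coarse.

```python
def refine(cells, error_estimate, tol):
    """One AMR pass on a 1D mesh of (a, b) intervals: split any cell whose
    estimated error exceeds tol, keep the rest coarse."""
    new_cells = []
    for (a, b) in cells:
        if error_estimate(a, b) > tol:
            mid = 0.5 * (a + b)
            new_cells.append((a, mid))
            new_cells.append((mid, b))
        else:
            new_cells.append((a, b))
    return new_cells

def err(a, b):
    # Toy indicator for f(x) = x**2: cell width times |f'(midpoint)|,
    # up to a constant, so error grows where the gradient is steeper
    return (b - a) * abs(a + b)

mesh = [(0.0, 1.0)]
for _ in range(3):
    mesh = refine(mesh, err, tol=0.25)
# The mesh ends up finer near x = 1, where the toy error indicator is largest
```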
Adapting the mesh at runtime necessitates scalable regridding and load balancing algorithms. This dissertation analyzes the performance bottlenecks for a widely used regridding algorithm and presents two new algorithms which exhibit ideal scalability. In addition, a scalable space-filling curve generation algorithm for dynamic load balancing is also presented. The performance of these algorithms is analyzed by determining their theoretical complexity, deriving performance models, and comparing the observed performance to those performance models. The models are then used to predict performance on larger numbers of processors. This analysis demonstrates the necessity of these algorithms at larger numbers of processors. This dissertation also investigates methods to more accurately predict workloads based on measurements taken at runtime. While the methods used are not new, the application of these methods to the load balancing process is. These methods are shown to be highly accurate and able to predict the workload within 3% error. By improving the accuracy of these estimations, the load imbalance of the simulation can be reduced, thereby increasing the overall performance.
Finally, the scalability of AMR simulations as a whole using these algorithms is tested within the Uintah computational framework. Scalability tests are performed using up to 98,304 processors and nearly ideal scalability is demonstrated.
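A minimal sketch of space-filling-curve load balancing of the kind the dissertation studies (not the actual Uintah algorithm): patches are ordered along a Z-order (Morton) curve, and the ordering is cut into contiguous, roughly equal-weight chunks per processor. The 2D patch coordinates and all function names here are hypothetical.

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of integer coordinates (x, y) into a Z-order
    (Morton) key; sorting by this key keeps spatially nearby patches
    adjacent in the ordering, which is the basis of SFC load balancing."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def partition_by_curve(patches, weights, n_procs):
    """Sort patches along the curve, then cut the ordering into n_procs
    contiguous chunks of roughly equal total weight."""
    order = sorted(range(len(patches)), key=lambda i: morton_key(*patches[i]))
    target = sum(weights) / n_procs
    assignment, acc, proc = {}, 0.0, 0
    for i in order:
        if acc >= target and proc < n_procs - 1:
            proc += 1
            acc = 0.0
        assignment[i] = proc
        acc += weights[i]
    return assignment
```

Because each processor receives a contiguous segment of the curve, patches assigned together tend to be spatial neighbors, which keeps communication between processors low.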
C. Mahnkopf, T.J. Badger, N.S. Burgon, M. Daccarett, T.S. Haslam, C.T. Badger, C.J. McGann, N. Akoum, E. Kholmovski, R.S. Macleod, N.F. Marrouche.
Evaluation of the left atrial substrate in patients with lone atrial fibrillation using delayed-enhanced MRI: implications for disease progression and response to catheter ablation, In Heart Rhythm, Vol. 7, No. 10, pp. 1475--1481. 2010.
PubMed ID: 20601148
C. Marc, C. Vachet, J.E. Blocher, G. Gerig, J.H. Gilmore, M.A. Styner.
Changes of MR and DTI appearance in early human brain development, In SPIE Medical Imaging, Vol. 7623, 762324, February, 2010.
DOI: 10.1117/12.844912
Q. Meng, J. Luitjens, M. Berzins.
Dynamic Task Scheduling for Scalable Parallel AMR in the Uintah Framework, SCI Technical Report, No. UUSCI-2010-001, SCI Institute, University of Utah, 2010.
Q. Meng, J. Luitjens, M. Berzins.
Dynamic Task Scheduling for the Uintah Framework, In Proceedings of the 3rd IEEE Workshop on Many-Task Computing on Grids and Supercomputers (MTAGS10), pp. 1--10. 2010.
DOI: 10.1109/MTAGS.2010.5699431
Uintah is a computational framework for fluid-structure interaction problems using a combination of the ICE fluid flow algorithm, adaptive mesh refinement (AMR) and MPM particle methods. Uintah uses domain decomposition with a task-graph approach for asynchronous communication and automatic message generation. The Uintah software has been used for a decade with its original task scheduler that ran computational tasks in a predefined static order. In order to improve the performance of Uintah for petascale architectures, a new dynamic task scheduler allowing better overlapping of communication and computation is designed and evaluated in this study. The new scheduler supports asynchronous, out-of-order scheduling of computational tasks by putting them in a distributed directed acyclic graph (DAG) and by isolating task memory and keeping multiple copies of task variables in a data warehouse when necessary. A new runtime system has been implemented with a two-stage priority queuing architecture to improve the scheduling efficiency. The effectiveness of this new approach is shown through an analysis of the performance of the software on large scale fluid-structure examples.
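The out-of-order, priority-driven scheduling idea can be sketched as follows. This is not the Uintah scheduler (which is distributed and overlaps MPI communication with computation); it is a serial toy showing only the core mechanism: tasks become ready once all their DAG predecessors finish, and a priority queue chooses among the ready tasks instead of following a predefined static order. Task names and priority values are hypothetical.

```python
import heapq

def schedule(tasks, deps, priority):
    """Run a DAG of tasks out of order: deps is a list of (before, after)
    edges; among ready tasks, the lowest priority value runs first."""
    indegree = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for before, after in deps:
        indegree[after] += 1
        children[before].append(after)
    # Tasks with no unmet dependencies are immediately ready
    ready = [(priority[t], t) for t in tasks if indegree[t] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)
        order.append(t)               # "execute" the task
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:      # last predecessor finished
                heapq.heappush(ready, (priority[c], c))
    return order
```

In a distributed setting, the payoff of this structure is that a task blocked on a pending message no longer stalls the whole schedule; any other ready task can run in its place.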
M.D. Meyer, T. Munzner, A. DePace, H. Pfister.
MulteeSum: A Tool for Comparative Spatial and Temporal Gene Expression Data, In IEEE Transactions on Visualization and Computer Graphics (Proceedings of InfoVis 2010), Vol. 16, No. 6, pp. 908--917. 2010.
M.D. Meyer, B. Wong, M. Styczynski, T. Munzner, H. Pfister.
Pathline: A Tool for Comparative Functional Genomics, In Computer Graphics Forum, Vol. 29, No. 3, Wiley-Blackwell, pp. 1043--1052. Aug, 2010.
DOI: 10.1111/j.1467-8659.2009.01710.x
Biologists pioneering the new field of comparative functional genomics attempt to infer the mechanisms of gene regulation by looking for similarities and differences of gene activity over time across multiple species. They use three kinds of data: functional data such as gene activity measurements, pathway data that represent a series of reactions within a cellular process, and phylogenetic relationship data that describe the relatedness of species. No existing visualization tool can visually encode the biologically interesting relationships between multiple pathways, multiple genes, and multiple species. We tackle the challenge of visualizing all aspects of this comparative functional genomics dataset with a new interactive tool called Pathline. In addition to the overall characterization of the problem and design of Pathline, our contributions include two new visual encoding techniques. One is a new method for linearizing metabolic pathways that provides appropriate topological information and supports the comparison of quantitative data along the pathway. The second is the curvemap view, a depiction of time series data for comparison of gene activity and metabolite levels across multiple species. Pathline was developed in close collaboration with a team of genomic scientists. We validate our approach with case studies of the biologists' use of Pathline and report on how they use the tool to confirm existing findings and to discover new scientific insights.
H. Mirzaee, J.K. Ryan, R.M. Kirby.
Quantification of Errors Introduced in the Numerical Approximation and Implementation of Smoothness-Increasing Accuracy Conserving (SIAC) Filtering of Discontinuous Galerkin (DG) Fields, In Journal of Scientific Computing, Vol. 45, pp. 447--470. 2010.