SCI Publications

2010


A. Lex, M. Streit, C. Partl, K. Kashofer, D. Schmalstieg. “Comparative Analysis of Multidimensional, Quantitative Data,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 16, No. 6, pp. 1027--1035. 2010.

ABSTRACT

When analyzing multidimensional, quantitative data, the comparison of two or more groups of dimensions is a common task. Typical sources of such data are experiments in biology, physics or engineering, which are conducted in different configurations and use replicates to ensure statistically significant results. One common way to analyze this data is to filter it using statistical methods and then run clustering algorithms to group similar values. The clustering results can be visualized using heat maps, which show differences between groups as changes in color. However, in cases where groups of dimensions have an a priori meaning, it is not desirable to cluster all dimensions combined, since a clustering algorithm can fragment continuous blocks of records. Furthermore, identifying relevant elements in heat maps becomes more difficult as the number of dimensions increases. To aid in such situations, we have developed Matchmaker, a visualization technique that allows researchers to arbitrarily arrange and compare multiple groups of dimensions at the same time. We create separate groups of dimensions which can be clustered individually, and place them in an arrangement of heat maps reminiscent of parallel coordinates. To identify relations, we render bundled curves and ribbons between related records in different groups. We then allow interactive drill-downs using enlarged detail views of the data, which enable in-depth comparisons of clusters between groups. To reduce visual clutter, we minimize crossings between the views. This paper concludes with two case studies. The first demonstrates the value of our technique for the comparison of clustering algorithms. In the second, biologists use our system to investigate why certain strains of mice develop liver disease while others remain healthy, informally showing the efficacy of our system when analyzing multidimensional data containing distinct groups of dimensions.
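
The key idea of clustering each a priori group of dimensions on its own, rather than clustering all dimensions combined, can be sketched in a few lines of Python. The data, group boundaries, and cluster count below are invented for illustration; this is not the Matchmaker implementation:

```python
# Minimal sketch of per-group clustering: each a-priori group of
# dimensions (e.g. one experimental condition) is clustered separately,
# so records are never fragmented by dimensions from another group.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 12))        # 100 records, 12 dimensions
groups = {"condition_A": slice(0, 4),    # hypothetical dimension groups
          "condition_B": slice(4, 8),
          "condition_C": slice(8, 12)}

cluster_labels = {}
for name, cols in groups.items():
    # Hierarchical clustering restricted to this group's dimensions only.
    Z = linkage(data[:, cols], method="average")
    cluster_labels[name] = fcluster(Z, t=4, criterion="maxclust")

# Records whose cluster membership changes between groups are the
# candidates for the bundled curves and ribbons drawn between heat maps.
print({name: np.bincount(labels)[1:] for name, labels in cluster_labels.items()})
```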



J. Li, D. Xiu. “Evaluation of Failure Probability via Surrogate Models,” In Journal of Computational Physics, Vol. 229, No. 23, pp. 8966--8980. 2010.
DOI: 10.1016/j.jcp.2010.08.022

ABSTRACT

Evaluation of failure probability of a given system requires sampling of the system response and can be computationally expensive. Therefore, it is desirable to construct an accurate surrogate model for the system response and subsequently to sample the surrogate model. In this paper we discuss the properties of this approach. We demonstrate that the straightforward sampling of a surrogate model can lead to erroneous results, no matter how accurate the surrogate model is. We then propose a hybrid approach by sampling both the surrogate model in a “large” portion of the probability space and the original system in a “small” portion. The resulting algorithm is significantly more efficient than the traditional sampling method, and is more accurate and robust than the straightforward surrogate model approach. A rigorous convergence proof is established for the hybrid approach, and practical implementation is discussed. Numerical examples are provided to verify the theoretical findings and demonstrate the efficiency gain of the approach.

Keywords: Failure probability, Sampling, Polynomial chaos, Stochastic computation
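
The hybrid sampling idea can be illustrated on a toy one-dimensional problem. The response f, the surrogate g, the failure threshold, and the width of the re-evaluation band below are all invented; in the paper, the region where the original system must be sampled is derived rigorously rather than chosen by hand:

```python
# Toy sketch of hybrid failure-probability estimation: classify almost
# all samples with the cheap surrogate g, but re-evaluate the expensive
# model f only where the surrogate response falls near the failure
# threshold, which is where surrogate misclassification is most likely.
import numpy as np

rng = np.random.default_rng(1)

def f(x):                        # "expensive" true system response
    return np.sin(3 * x) + 0.1 * x**2

def g(x):                        # cheap surrogate of f
    return np.sin(3 * x)

threshold = 0.95                 # failure event: response > threshold
gamma = 0.2                      # half-width of the re-evaluation band
x = rng.normal(size=100_000)     # samples of the random input

gx = g(x)
fail = gx > threshold                 # cheap classification everywhere...
near = np.abs(gx - threshold) < gamma
fail[near] = f(x[near]) > threshold   # ...but use f where g is unreliable

print("hybrid failure probability estimate:", fail.mean())
print("expensive evaluations:", near.sum(), "out of", x.size)
```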



W. Liu, P. Zhu, J.S. Anderson, D. Yurgelun-Todd, P.T. Fletcher. “Spatial Regularization of Functional Connectivity Using High-Dimensional Markov Random Fields,” In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2010), Vol. 14, pp. 363--370. 2010.
PubMed ID: 20879336



Z. Liu, C. Goodlett, G. Gerig, M. Styner. “Evaluation of DTI Property Maps as Basis of DTI Atlas Building,” In SPIE Medical Imaging, Vol. 7623, 762325, February, 2010.
DOI: 10.1117/12.844911



Z. Liu, Y. Wang, G. Gerig, S. Gouttard, R. Tao, T. Fletcher, M.A. Styner. “Quality control of diffusion weighted images,” In SPIE Medical Imaging, Vol. 7628, 76280J, February, 2010.
DOI: 10.1117/12.844748



Y. Livnat, P. Gesteland, J. Benuzillo, W. Pettey, D. Bolton, F. Drews, H. Kramer, M. Samore. “A Novel Workbench for Epidemic Investigation and Analysis of Search Strategies in Public Health Practice,” In Proceedings of AMIA 2010 Annual Symposium, pp. 647--651. 2010.



M.A.S. Lizier, M.F. Siqueira, J.D. Daniels II, C.T. Silva, L.G. Nonato. “Template-based Remeshing for Image Decomposition,” In Proceedings of the 23rd SIBGRAPI Conference on Graphics, Patterns and Images, pp. 95--102. 2010.



J. Luitjens, M. Berzins. “Improving the Performance of Uintah: A Large-Scale Adaptive Meshing Computational Framework,” In Proceedings of the 24th IEEE International Parallel and Distributed Processing Symposium (IPDPS10), Atlanta, GA, pp. 1--10. 2010.
DOI: 10.1109/IPDPS.2010.5470437

ABSTRACT

Uintah is a highly parallel and adaptive multi-physics framework created by the Center for Simulation of Accidental Fires and Explosions in Utah. Uintah, which is built upon the Common Component Architecture, has facilitated the simulation of a wide variety of fluid-structure interaction problems using both adaptive structured meshes for the fluid and particles to model solids. Uintah was originally designed for, and has performed well on, about a thousand processors. The evolution of Uintah to use tens of thousands of processors has required improvements in memory usage, data structure design, load balancing algorithms and cost estimation in order to improve strong and weak scalability up to 98,304 cores for situations in which the mesh used varies adaptively and also cases in which particles that represent the solids move from mesh cell to mesh cell.

Keywords: csafe, c-safe, scirun, uintah, fires, explosions, simulation



J. Luitjens, J. Guilkey, T. Harman, B. Worthen, S.G. Parker. “Adaptive Computations in the Uintah Framework,” In Advanced Computational Infrastructures for Parallel/Distributed Adaptive Applications, Ch. 1, Wiley Press, 2010.



J. Luitjens. “The Scalability of Parallel Adaptive Mesh Refinement Within Uintah,” Ph.D. dissertation, School of Computing, University of Utah, 2010.

ABSTRACT

Solutions to Partial Differential Equations (PDEs) are often computed by discretizing the domain into a collection of computational elements referred to as a mesh. This solution is an approximation with an error that decreases as the mesh spacing decreases. However, decreasing the mesh spacing also increases the computational requirements. Adaptive mesh refinement (AMR) attempts to reduce the error while limiting the increase in computational requirements by refining the mesh locally in regions of the domain that have large error while maintaining a coarse mesh in other portions of the domain. This approach often provides a solution that is as accurate as that obtained from a much larger fixed mesh simulation, thus saving on both computational time and memory. Historically, however, these AMR operations have often limited the overall scalability of the application.

Adapting the mesh at runtime necessitates scalable regridding and load balancing algorithms. This dissertation analyzes the performance bottlenecks for a widely used regridding algorithm and presents two new algorithms which exhibit ideal scalability. In addition, a scalable space-filling curve generation algorithm for dynamic load balancing is also presented. The performance of these algorithms is analyzed by determining their theoretical complexity, deriving performance models, and comparing the observed performance to those performance models. The models are then used to predict performance on larger numbers of processors. This analysis demonstrates the necessity of these algorithms at larger numbers of processors. This dissertation also investigates methods to more accurately predict workloads based on measurements taken at runtime. While the methods used are not new, the application of these methods to the load balancing process is. These methods are shown to be highly accurate and able to predict the workload within 3% error. By improving the accuracy of these estimations, the load imbalance of the simulation can be reduced, thereby increasing the overall performance.

Finally, the scalability of AMR simulations as a whole using these algorithms is tested within the Uintah computational framework. Scalability tests are performed using up to 98,304 processors and nearly ideal scalability is demonstrated.
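
As one illustration of the space-filling-curve idea discussed above, the sketch below orders mesh patches along a 2-D Morton (Z-order) curve and splits the curve into contiguous segments of roughly equal estimated work. Patch positions, cost estimates, and processor count are invented; this is a generic sketch, not the dissertation's algorithms:

```python
# Space-filling-curve load balancing, minimally: nearby patches end up
# close together along the curve, so contiguous curve segments yield
# spatially compact, roughly equal-work processor assignments.
import numpy as np

def morton_key(i, j, bits=10):
    """Interleave the bits of (i, j) to get a 2-D Z-order index."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b + 1)
        key |= ((j >> b) & 1) << (2 * b)
    return key

rng = np.random.default_rng(2)
coords = rng.integers(0, 64, size=(200, 2))   # patch (i, j) positions
weights = rng.uniform(1.0, 5.0, size=200)     # estimated cost per patch
nprocs = 8

order = np.argsort([morton_key(i, j) for i, j in coords])
cum = np.cumsum(weights[order])
# Assign each patch to a processor by its position in the cumulative
# work, so every processor gets a contiguous curve segment of ~equal work.
owner = np.minimum((cum / cum[-1] * nprocs).astype(int), nprocs - 1)

per_proc = np.bincount(owner, weights=weights[order], minlength=nprocs)
print("work per processor:", np.round(per_proc, 1))
```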



C. Mahnkopf, T.J. Badger, N.S. Burgon, M. Daccarett, T.S. Haslam, C.T. Badger, C.J. McGann, N. Akoum, E. Kholmovski, R.S. Macleod, N.F. Marrouche. “Evaluation of the left atrial substrate in patients with lone atrial fibrillation using delayed-enhanced MRI: implications for disease progression and response to catheter ablation,” In Heart Rhythm, Vol. 7, No. 10, pp. 1475--1481. 2010.
PubMed ID: 20601148



C. Marc, C. Vachet, J.E. Blocher, G. Gerig, J.H. Gilmore, M.A. Styner. “Changes of MR and DTI appearance in early human brain development,” In SPIE Medical Imaging, Vol. 7623, 762324, 2010.
DOI: 10.1117/12.844912



Q. Meng, J. Luitjens, M. Berzins. “Dynamic Task Scheduling for Scalable Parallel AMR in the Uintah Framework,” SCI Technical Report, No. UUSCI-2010-001, SCI Institute, University of Utah, 2010.



Q. Meng, J. Luitjens, M. Berzins. “Dynamic Task Scheduling for the Uintah Framework,” In Proceedings of the 3rd IEEE Workshop on Many-Task Computing on Grids and Supercomputers (MTAGS10), pp. 1--10. 2010.
DOI: 10.1109/MTAGS.2010.5699431

ABSTRACT

Uintah is a computational framework for fluid-structure interaction problems using a combination of the ICE fluid flow algorithm, adaptive mesh refinement (AMR) and MPM particle methods. Uintah uses domain decomposition with a task-graph approach for asynchronous communication and automatic message generation. The Uintah software has been used for a decade with its original task scheduler that ran computational tasks in a predefined static order. In order to improve the performance of Uintah for petascale architectures, a new dynamic task scheduler allowing better overlapping of communication and computation is designed and evaluated in this study. The new scheduler supports asynchronous, out-of-order scheduling of computational tasks by putting them in a distributed directed acyclic graph (DAG) and by isolating task memory and keeping multiple copies of task variables in a data warehouse when necessary. A new runtime system has been implemented with a two-stage priority queuing architecture to improve the scheduling efficiency. The effectiveness of this new approach is shown through an analysis of the performance of the software on large scale fluid-structure examples.
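
A toy sketch of the out-of-order scheduling idea: tasks become eligible as soon as their inputs are satisfied and are drained from a priority queue instead of running in a fixed static order. The task graph and priorities below are invented and the sketch is serial; it is not the Uintah scheduler, which also manages MPI communication, the data warehouse, and multiple copies of task variables:

```python
# Out-of-order task scheduling from a DAG with a priority queue.
import heapq

# task -> (priority, dependencies); lower number = higher priority.
# For example, tasks that trigger communication could be prioritized
# so communication overlaps with computation.
tasks = {
    "recv_halo": (0, []),
    "interior":  (2, []),                 # no deps: eligible immediately
    "boundary":  (1, ["recv_halo"]),
    "send_halo": (0, ["interior"]),
    "reduce":    (3, ["interior", "boundary"]),
}

indegree = {t: len(deps) for t, (_, deps) in tasks.items()}
dependents = {t: [] for t in tasks}
for t, (_, deps) in tasks.items():
    for d in deps:
        dependents[d].append(t)

ready = [(prio, t) for t, (prio, deps) in tasks.items() if not deps]
heapq.heapify(ready)

while ready:
    prio, t = heapq.heappop(ready)
    print("running", t)                   # would execute the task here
    for nxt in dependents[t]:
        indegree[nxt] -= 1
        if indegree[nxt] == 0:            # all inputs now satisfied
            heapq.heappush(ready, (tasks[nxt][0], nxt))
```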



M.D. Meyer, T. Munzner, A. DePace, H. Pfister. “MulteeSum: A Tool for Comparative Spatial and Temporal Gene Expression Data,” In IEEE Transactions on Visualization and Computer Graphics (Proceedings of InfoVis 2010), Vol. 16, No. 6, pp. 908--917. 2010.

ABSTRACT

Cells in an organism share the same genetic information in their DNA, but have very different forms and behavior because of the selective expression of subsets of their genes. The widely used approach of measuring gene expression over time from a tissue sample using techniques such as microarrays or sequencing does not provide information about the spatial position within the tissue where these genes are expressed. In contrast, we are working with biologists who use techniques that measure gene expression in every individual cell of entire fruit fly embryos over an hour of their development, and do so for multiple closely-related subspecies of Drosophila. These scientists are faced with the challenge of integrating temporal gene expression data with the spatial location of cells and, moreover, comparing this data across multiple related species. We have worked with these biologists over the past two years to develop MulteeSum, a visualization system that supports inspection and curation of data sets showing gene expression over time, in conjunction with the spatial location of the cells where the genes are expressed — it is the first tool to support comparisons across multiple such data sets. MulteeSum is part of a general and flexible framework we developed with our collaborators that is built around multiple summaries for each cell, allowing the biologists to explore the results of computations that mix spatial information, gene expression measurements over time, and data from multiple related species or organisms. We justify our design decisions based on specific descriptions of the analysis needs of our collaborators, and provide anecdotal evidence of the efficacy of MulteeSum through a series of case studies.



M.D. Meyer, B. Wong, M. Styczynski, T. Munzner, H. Pfister. “Pathline: A Tool for Comparative Functional Genomics,” In Computer Graphics Forum, Vol. 29, No. 3, Wiley-Blackwell, pp. 1043--1052. August, 2010.
DOI: 10.1111/j.1467-8659.2009.01710.x

ABSTRACT

Biologists pioneering the new field of comparative functional genomics attempt to infer the mechanisms of gene regulation by looking for similarities and differences of gene activity over time across multiple species. They use three kinds of data: functional data such as gene activity measurements, pathway data that represent a series of reactions within a cellular process, and phylogenetic relationship data that describe the relatedness of species. No existing visualization tool can visually encode the biologically interesting relationships between multiple pathways, multiple genes, and multiple species. We tackle the challenge of visualizing all aspects of this comparative functional genomics dataset with a new interactive tool called Pathline. In addition to the overall characterization of the problem and design of Pathline, our contributions include two new visual encoding techniques. One is a new method for linearizing metabolic pathways that provides appropriate topological information and supports the comparison of quantitative data along the pathway. The second is the curvemap view, a depiction of time series data for comparison of gene activity and metabolite levels across multiple species. Pathline was developed in close collaboration with a team of genomic scientists. We validate our approach with case studies of the biologists' use of Pathline and report on how they use the tool to confirm existing findings and to discover new scientific insights.
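
The abstract does not describe Pathline's linearization method in enough detail to reproduce it, but the general notion of flattening a pathway graph while preserving topological information can be sketched with a topological sort over an invented reaction graph:

```python
# Generic stand-in for pathway linearization: flatten a small reaction
# graph (invented) into a linear order, and record branch points so
# topological information is not lost in the linear layout.
from graphlib import TopologicalSorter

# reaction graph: node -> set of predecessors (substrates)
pathway = {
    "glucose": set(),
    "g6p": {"glucose"},
    "f6p": {"g6p"},
    "6pg": {"g6p"},          # branch off the main line
    "fbp": {"f6p"},
}

linear_order = list(TopologicalSorter(pathway).static_order())
print("linearized pathway:", " -> ".join(linear_order))

# Branch points (nodes with more than one successor) would be marked
# along the linear layout, e.g. as annotations next to the main line.
successors = {n: [m for m, preds in pathway.items() if n in preds]
              for n in pathway}
branches = [n for n, succ in successors.items() if len(succ) > 1]
print("branch points:", branches)
```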



H. Mirzaee, J.K. Ryan, R.M. Kirby. “Quantification of Errors Introduced in the Numerical Approximation and Implementation of Smoothness-Increasing Accuracy Conserving (SIAC) Filtering of Discontinuous Galerkin (DG) Fields,” In Journal of Scientific Computing, Vol. 45, pp. 447--470. 2010.



A.R.C. Paiva, T. Tasdizen. “Fast Semi-Supervised Image Segmentation by Novelty Selection,” In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, Texas, pp. 1054--1057. March, 2010.
DOI: 10.1109/ICASSP.2010.5495333



A.R.C. Paiva, I. Park, J.C. Principe. “Inner products for representation and learning in the spike train domain,” In Statistical Signal Processing for Neuroscience and Neurotechnology, Ch. 8, Edited by Karim G. Oweiss, Elsevier, pp. 265--309. 2010.
DOI: 10.1016/b978-0-12-375027-3.00008-9



A.R.C. Paiva, I. Park, J.C. Principe. “Optimization in Reproducing Kernel Hilbert Spaces of Spike Trains,” In Computational Neuroscience, Edited by W. Chaovalitwongse et al., Springer, pp. 3--29. 2010.
ISBN: 978-0-387-88629-9
DOI: 10.1007/978-0-387-88630-5_1