August 27, 2015 - Scott Gibson, Communications Specialist, University of Tennessee

More elegant techniques combined with highly interdisciplinary, multi-scale collaboration are essential for dealing with massive amounts of information, a plenary speaker said at the XSEDE15 conference.

A curse of dealing with mounds of data so massive that they require special tools, said computer scientist Valerio Pascucci, is that if you look for something, you will probably find it, thus injecting bias into the analysis.

In his plenary talk titled "Extreme Data Management Analysis and Visualization: Exploring Large Data for Science Discovery" on July 28 during the XSEDE15 conference in St. Louis, Dr. Pascucci said that getting clean, guaranteed, unbiased results in data analyses requires highly interdisciplinary, multi-scale collaboration and techniques that unify the math and computer science behind the applications used in physics, biology, and medicine.

The techniques and use cases he shared during his talk reflected about a decade and a half of getting down in the trenches to understand research efforts in disparate scientific domains, cutting through semantics, and capturing extensible mathematical foundations that could be applied in developing robust, efficient algorithms and applications.

Fewer Tools but Greater Utility

"You can build an economy of tools by deconstructing the math, looking for commonalities, and developing fewer tools that can be of use to more people," Pascucci said in a post-talk interview. And to avoid developing biased algorithms, "you try to delay as long as possible application development." The goal, he noted, is to create techniques that leave little room for mental shortcuts, or heuristics, and emphasize a formalized mathematical approach.

Creating those techniques, however, requires cross-pollination between, and integration of, data management and data analysis, tasks that have traditionally been performed by different communities, Pascucci pointed out. Collaboration that combines those efforts, he added, is a necessary ingredient for a successful supercomputing center or cyberinfrastructure—a dynamic ecosystem of people, advanced computing systems, data, and software.

Processing on the Fly

In managing large datasets, a platform for processing on the fly is important, said Pascucci, because researchers need to be able to make decisions under incomplete information. "This is something that people often underestimate," he added.

One innovation that Pascucci and his colleagues at the Center for Extreme Data Management, Analysis, and Visualization (CEDMAV) at the University of Utah have developed is a framework, called ViSUS, for processing large-scale scientific data with high-performance selective queries on multiple terabytes of raw data. This data model is combined with progressive streaming techniques that allow processing on a wide range of computing devices, from iPhones and simple workstations to the input/output nodes of parallel supercomputers. The framework has, for example, enabled the real-time streaming of massive combustion simulations from U.S. Department of Energy platforms to national laboratories.
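ViSUS's selective, progressive queries rest on data layouts that keep spatially nearby samples nearby on disk, so a coarse overview can be streamed before full resolution arrives; the published ViSUS/IDX work is built around a hierarchical Z-order layout. As a rough illustration (not the ViSUS API, and only the basic building block of the hierarchical scheme), here is plain Morton/Z-order bit interleaving in Python, with names of my own choosing:

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of (x, y) into a single Z-order index.

    Points that are close in 2-D tend to stay close in the 1-D
    index, which is the building block of cache- and
    stream-friendly layouts like the hierarchical Z-order
    underlying ViSUS/IDX.
    """
    z = 0
    for i in range(bits):
        z |= (x >> i & 1) << (2 * i)      # even bits come from x
        z |= (y >> i & 1) << (2 * i + 1)  # odd bits come from y
    return z

# Sorting grid points by their Morton index produces the classic
# recursive "Z" traversal of the grid.
points = [(x, y) for y in range(4) for x in range(4)]
points.sort(key=lambda p: morton2d(*p))
```

The hierarchical variant used in IDX further reorders these indices so that each resolution level is stored contiguously, which is what makes progressive, coarse-to-fine streaming cheap.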

Key Infrastructure Elements and the Big Picture

Pascucci's talk described performance, user access, analytics, and applications as the key elements of a computational infrastructure for scientific discovery, and it delved into each area with demonstrations and use cases: interactive remote analysis and visualization of 6 terabytes of imaging data, scaling of in-situ analytics, data abstractions and visual metaphors, and examples of simulations, experiments, and data collection associated with climate, combustion, astrophysics, microscopy, Twitter, and more.

Topology, the area of mathematics concerned with the properties of spaces, received particular emphasis as an analytics technique that complements statistics. Topology, Pascucci explained, is particularly adept at showing local and global trends and describing shapes with great precision, and its principles also extend to the analysis of other types of data. He presented the application of a discrete topological framework for the representation and analysis of large-scale scientific data.
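The discrete topological frameworks Pascucci described are far richer than can be shown here, but a core idea, tracking how connected components of sublevel sets are born at minima and die at saddles (persistence), can be illustrated on a 1-D scalar field. The following is a minimal sketch of that idea; the function name and structure are my own, not drawn from any of the tools in the talk:

```python
def persistence_pairs_1d(values):
    """0-dimensional sublevel-set persistence of a 1-D scalar field.

    Sweep samples from lowest to highest value. Each local minimum
    births a connected component; when two components meet at a
    saddle, the one born at the higher minimum dies (elder rule).
    Returns (birth, death) pairs; the component of the global
    minimum never dies, so it is paired with None.
    """
    n = len(values)
    parent = list(range(n))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    order = sorted(range(n), key=lambda i: values[i])
    active = [False] * n
    birth = {}                        # component root -> birth value
    pairs = []
    for i in order:
        active[i] = True
        birth[i] = values[i]
        for j in (i - 1, i + 1):      # merge with active neighbors
            if 0 <= j < n and active[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:
                    ri, rj = rj, ri   # rj is the younger component
                pairs.append((birth[rj], values[i]))
                parent[rj] = ri
    for r in {find(i) for i in range(n)}:
        pairs.append((birth[r], None))
    # drop zero-persistence pairs created by non-critical points
    return [(b, d) for b, d in pairs if d is None or d > b]

# Two dips separated by a ridge: the shallower dip (born at 1)
# dies when the sublevel set reaches the saddle value 3.
print(persistence_pairs_1d([0, 3, 1, 4]))  # → [(1, 3), (0, None)]
```

The same births-and-merges bookkeeping, generalized to meshes and distributed over many nodes, is what lets topological summaries such as merge trees describe features in terabyte-scale simulation data.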

In the big picture of dealing with massive amounts of information, Pascucci said, integrated management, analysis, and visualization of large data can be a catalyst for a virtuous cycle of collaborative activities involving basic research, software deployment, user support, and commercialization. A wide spectrum of interdisciplinary collaborations, he added, can have a positive effect by motivating the work, providing formal theoretical approaches, and feeding results back to specific disciplines.

Pascucci is the founding director of CEDMAV; a professor in the Scientific Computing and Imaging Institute and the School of Computing at the University of Utah; a laboratory fellow at Pacific Northwest National Laboratory; and chief technology officer at Visus LLC (visus.net), a spin-off of the University of Utah.

Original article appears in HPC Wire