Developing software tools for science has always been a central vision of the SCI Institute.

SCI Publications

2017


J. Jiang, Y. Chen, A. Narayan. “Offline-Enhanced Reduced Basis Method through adaptive construction of the Surrogate Parameter Domain,” In Journal of Scientific Computing, Vol. 73, No. 2-3, pp. 853--875. 2017.
ISSN: 0885-7474, 1573-7691
DOI: 10.1007/s10915-017-0551-3

ABSTRACT

The Reduced Basis Method (RBM) is a popular certified model reduction approach for solving parametrized partial differential equations. One critical stage of the offline portion of the algorithm is a greedy algorithm, requiring maximization of an error estimate over parameter space. In practice this maximization is usually performed by replacing the parameter domain continuum with a discrete "training" set. When the dimension of parameter space is large, it is necessary to significantly increase the size of this training set in order to effectively search parameter space. Large training sets diminish the attractiveness of RBM algorithms since this proportionally increases the cost of the offline phase.

In this work we propose novel strategies for offline RBM algorithms that mitigate the computational difficulty of maximizing error estimates over a training set. The main idea is to identify a subset of the training set, a "surrogate parameter domain" (SPD), on which to perform greedy algorithms. The SPDs we construct are much smaller in size than the full training set, yet our examples suggest that they are accurate enough to represent the solution manifold of interest at the current offline RBM iteration. We propose two algorithms to construct the SPD: our first algorithm, the Successive Maximization Method (SMM), is inspired by inverse transform sampling for non-standard univariate probability distributions. The second constructs an SPD by identifying pivots in the Cholesky decomposition of an approximate error correlation matrix. We demonstrate the algorithm through numerical experiments, showing that the algorithm is capable of accelerating offline RBM procedures without degrading accuracy, assuming that the solution manifold has low Kolmogorov width.
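The second construction can be sketched with standard pivoted Cholesky: greedily pick the training point with the largest remaining diagonal of an approximate error correlation (Gram) matrix, then downdate. A minimal illustration, assuming such a matrix G over the K-point training set is available and has rank at least m; the names are ours, not the authors'.

```python
import numpy as np

def spd_by_pivoted_cholesky(G, m):
    """Pick m surrogate-parameter indices from a K-point training set.

    G: (K, K) approximate error correlation (Gram) matrix over the training set.
    Returns the pivot indices, i.e., the surrogate parameter domain (SPD).
    """
    K = G.shape[0]
    d = np.diag(G).astype(float).copy()   # residual diagonal
    L = np.zeros((K, m))
    pivots = []
    for j in range(m):
        p = int(np.argmax(d))             # greedy: largest remaining diagonal entry
        pivots.append(p)
        L[:, j] = (G[:, p] - L[:, :j] @ L[p, :j]) / np.sqrt(d[p])
        d -= L[:, j] ** 2                 # downdate the residual diagonal
        d[p] = 0.0                        # clamp the pivot for numerical safety
    return pivots
```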



M. Kern, A. Lex, N. Gehlenborg, C. R. Johnson. “Interactive Visual Exploration and Refinement of Cluster Assignments,” In BMC Bioinformatics, Cold Spring Harbor Laboratory, April, 2017.
DOI: 10.1101/123844

ABSTRACT

Background:
With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms don't properly account for ambiguity in the source data, as records are often assigned to discrete clusters, even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data.

Results:
In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes.

Conclusions:
Our methods are integrated into Caleydo StratomeX, a popular, web-based, disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.



S. Kumar, D. Hoang, S. Petruzza, J. Edwards, V. Pascucci. “Reducing Network Congestion and Synchronization Overhead During Aggregation of Hierarchical Data,” In 2017 IEEE 24th International Conference on High Performance Computing (HiPC), pp. 223-232. Dec, 2017.
DOI: 10.1109/HiPC.2017.00034

ABSTRACT

Hierarchical data representations have been shown to be effective tools for coping with large-scale scientific data. Writing hierarchical data on supercomputers, however, is challenging as it often involves all-to-one communication during aggregation of low-resolution data which tends to span the entire network domain, resulting in several bottlenecks. We introduce the concept of indexing templates, which succinctly describe data organization and can be used to alter movement of data in beneficial ways. We present two techniques, domain partitioning and localized aggregation, that leverage indexing templates to alleviate congestion and synchronization overheads during data aggregation. We report experimental results that show significant I/O speedup using our proposed schemes on two of today's fastest supercomputers, Mira and Shaheen II, using the Uintah and S3D simulation frameworks.
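The domain-partitioning idea is easy to picture generically: split the rank grid into partitions and elect one aggregator per partition, so all-to-one traffic never spans the full machine. A schematic sketch of that grouping with hypothetical names; it is not the paper's indexing-template machinery.

```python
import numpy as np

def local_aggregators(dims, parts):
    """Group ranks of a 3D block decomposition into per-partition aggregation groups.

    dims:  ranks per axis, e.g. (8, 8, 8); parts: partitions per axis, e.g. (2, 2, 2).
    Each axis of dims must be divisible by the matching axis of parts.
    Returns {aggregator_rank: member_ranks}; aggregation traffic stays inside a partition.
    """
    ranks = np.arange(np.prod(dims)).reshape(dims)
    sz = [d // p for d, p in zip(dims, parts)]
    groups = {}
    for ix in range(parts[0]):
        for iy in range(parts[1]):
            for iz in range(parts[2]):
                block = ranks[ix * sz[0]:(ix + 1) * sz[0],
                              iy * sz[1]:(iy + 1) * sz[1],
                              iz * sz[2]:(iz + 1) * sz[2]].ravel()
                groups[int(block[0])] = block  # lowest rank in each block aggregates locally
    return groups
```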



A. Narayan, J. Jakeman, T. Zhou. “A Christoffel function weighted least squares algorithm for collocation approximations,” In Mathematics of Computation, Vol. 86, No. 306, pp. 1913--1947. 2017.
ISSN: 0025-5718, 1088-6842
DOI: 10.1090/mcom/3192

ABSTRACT

We propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our method is motivated by generalized Polynomial Chaos approximation in uncertainty quantification where a polynomial approximation is formed from a combination of orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density of orthogonality. Our proposed algorithm samples with respect to the equilibrium measure of the parametric domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
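In one dimension on [-1, 1] the recipe is concrete: the equilibrium measure is the Chebyshev (arcsine) measure, and the density of orthogonality for Legendre polynomials is uniform. The sketch below samples accordingly, weights by the normalized Christoffel function, and solves the weighted least-squares problem; a minimal illustration, not the authors' code.

```python
import numpy as np
from numpy.polynomial import legendre

def christoffel_lsq(f, deg, m, rng=np.random.default_rng(0)):
    """Fit a degree-deg Legendre expansion to f from m Christoffel-weighted samples."""
    # Sample from the equilibrium (Chebyshev/arcsine) measure on [-1, 1].
    x = np.cos(np.pi * rng.random(m))
    # Legendre Vandermonde, rescaled so columns are orthonormal under
    # the uniform probability density dx/2 on [-1, 1].
    V = legendre.legvander(x, deg) * np.sqrt(2 * np.arange(deg + 1) + 1)
    # Christoffel-function weights: w(x) = N / sum_k p_k(x)^2, with N = deg + 1.
    w = (deg + 1) / np.sum(V**2, axis=1)
    sw = np.sqrt(w)
    # Weighted least squares: scale the rows of the system by sqrt(w).
    c, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)
    return c  # coefficients in the orthonormal Legendre basis
```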



T.A.J. Ouermi, A. Knoll, R.M. Kirby, M. Berzins. “OpenMP 4 Fortran Modernization of WSM6 for KNL,” In Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact, PEARC17, No. 12, ACM, pp. 12:1--12:8. 2017.
ISBN: 978-1-4503-5272-7
DOI: 10.1145/3093338.3093387

ABSTRACT

Parallel code portability in the petascale era requires modifying existing codes to support new architectures with large core counts and SIMD vector units. OpenMP is a well established and increasingly supported vehicle for portable parallelization. As architectures mature and compiler OpenMP implementations evolve, best practices for code modernization change as well. In this paper, we examine the impact of newer OpenMP features (in particular OMP SIMD) on the Intel Xeon Phi Knights Landing (KNL) architecture, applied in optimizing loops in the single moment 6-class microphysics module (WSM6) in the US Navy's NEPTUNE code. We find that with functioning OMP SIMD constructs, low thread invocation overhead on KNL and reduced penalty for unaligned access compared to previous architectures, one can leverage OpenMP 4 to achieve reasonable scalability with relatively minor reorganization of a production physics code.



T.A.J. Ouermi, A. Knoll, R.M. Kirby, M. Berzins. “Optimization Strategies for WRF Single-Moment 6-Class Microphysics Scheme (WSM6) on Intel Microarchitectures,” In Proceedings of the Fifth International Symposium on Computing and Networking (CANDAR 17), IEEE, 2017. Awarded Best Paper.

ABSTRACT

Optimizations in the petascale era require modifications of existing codes to take advantage of new architectures with large core counts and SIMD vector units. This paper examines high-level and low-level optimization strategies for numerical weather prediction (NWP) codes. These strategies employ thread-local structures of arrays (SOA) and an OpenMP directive such as OMP SIMD. These optimization approaches are applied to the Weather Research and Forecasting single-moment 6-class microphysics scheme (WSM6) in the US Navy NEPTUNE system. The results of this study indicate that the high-level approach with SOA and low-level OMP SIMD improves thread and vector parallelism by increasing data and temporal locality. The modified version of WSM6 runs 70x faster than the original serial code. This improvement is about 23.3x faster than the performance achieved by Ouermi et al., and 14.9x faster than the performance achieved by Michalakes et al.



B. Peterson, A. Humphrey, J. Schmidt, M. Berzins. “Addressing Global Data Dependencies in Heterogeneous Asynchronous Runtime Systems on GPUs,” In Proceedings of the Third International Workshop on Extreme Scale Programming Models and Middleware - ESPM2'17, ACM, 2017. Awarded Best Paper.
DOI: 10.1145/3152041.3152082

ABSTRACT

Large-scale parallel applications with complex global data dependencies beyond those of reductions pose significant scalability challenges in an asynchronous runtime system. Internodal challenges include identifying the all-to-all communication of data dependencies among the nodes. Intranodal challenges include gathering together these data dependencies into usable data objects while avoiding data duplication. This paper addresses these challenges within the context of a large-scale, industrial coal boiler simulation using the Uintah asynchronous many-task runtime system on GPU architectures. We show a significant reduction in time spent analyzing data dependencies through refinements in our dependency search algorithm. Multiple task graphs are used to eliminate subsequent analysis when task graphs change in predictable and repeatable ways. A combined data store and task scheduler redesign reduces data dependency duplication, ensuring that problems fit within host and GPU memory. These modifications did not require any changes to application code or sweeping changes to the Uintah runtime system. We report results running on the DOE Titan system on 119K CPU cores and 7.5K GPUs simultaneously. Our solutions can be generalized to other task dependency problems with global dependencies among thousands of nodes which must be processed efficiently at large scale.



P. Seshadri, A. Narayan, S. Mahadevan. “Effectively Subsampled Quadratures for Least Squares Polynomial Approximations,” In SIAM/ASA Journal on Uncertainty Quantification, pp. 1003--1023. Jan, 2017.

ABSTRACT

This paper proposes a new deterministic sampling strategy for constructing polynomial chaos approximations for expensive physics simulation models. The proposed approach, effectively subsampled quadratures, involves sparsely subsampling an existing tensor grid using QR column pivoting. For polynomial interpolation using hyperbolic or total order sets, we then solve the resulting square least-squares problem. For polynomial approximation, we use a column pruning heuristic that removes columns based on the highest total orders and then solves the tall least-squares problem. While we provide bounds on the condition number of such tall submatrices, it is difficult to ascertain how column pruning affects solution accuracy, as this is problem specific. We conclude with numerical experiments on an analytical function and a model piston problem that show the efficacy of our approach compared with randomized subsampling. We also show an example where this method fails.
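The subsampling step can be sketched directly: apply QR with column pivoting to the transpose of the basis matrix built on the tensor grid, keep the first n pivoted grid points, and solve the square system on them. A schematic version under assumed names, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import qr

def subsampled_quadrature(A, f_vals):
    """A: (M, n) basis matrix evaluated on an M-point tensor grid (M >= n);
    f_vals: (M,) model evaluations at the grid points."""
    M, n = A.shape
    # QR with column pivoting on A^T ranks the rows (grid points) of A.
    _, _, piv = qr(A.T, pivoting=True)
    keep = piv[:n]                               # the n "effectively subsampled" points
    c = np.linalg.solve(A[keep], f_vals[keep])   # square system on the retained points
    return c, keep
```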



J. Tate, K. Gillette, B. Burton, W. Good, J. Coll-Font, D. Brooks, R. MacLeod. “Analyzing Source Sampling to Reduce Error in ECG Forward Simulations,” In Computing in Cardiology, Vol. 44, 2017.

ABSTRACT

A continuing challenge in validating ECG Imaging is the persistent error in the associated forward problem observed in experimental studies. One possible cause of error is insufficient representation of the cardiac sources, which is often measured from only the ventricular epicardium, ignoring the endocardium and the atria. We hypothesize that measurements that completely cover the heart are required for accurate forward solutions. In this study, we used simulated and measured cardiac potentials to test the effect of different levels of sampling on the forward simulation. We found that omitting source samples on the atria increases the peak RMS error by a mean of 464 μV when compared to the fully sampled cardiac surface. Increasing the sampling on the atria in stages reduced the average error of the forward simulation proportionally to the number of additional samples and revealed that some strategies may reduce error with fewer samples, such as adding samples to the AV plane and the atrial roof. Based on these results, we can design a sampling strategy to use in future validation studies.



W. Thevathasan, B. Debu, T. Aziz, B. R. Bloem, C. Blahak, C. Butson, V. Czernecki, T. Foltynie, V. Fraix, D. Grabli, C. Joint, A. M. Lozano, M. S. Okun, J. Ostrem, N. Pavese, C. Schrader, C. H. Tai, J. K. Krauss, E. Moro. “Pedunculopontine nucleus deep brain stimulation in Parkinson's disease: A clinical review,” In Movement Disorders, Vol. 33, No. 1, pp. 10--20. 2017.
ISSN: 1531-8257
DOI: 10.1002/mds.27098

ABSTRACT

Pedunculopontine nucleus region deep brain stimulation (DBS) is a promising but experimental therapy for axial motor deficits in Parkinson's disease (PD), particularly gait freezing and falls. Here, we summarise the clinical application and outcomes reported during the past 10 years. The published dataset is limited, comprising fewer than 100 cases. Furthermore, there is great variability in clinical methodology between and within surgical centers. The most common indication has been severe medication refractory gait freezing (often associated with postural instability). Some patients received lone pedunculopontine nucleus DBS (unilateral or bilateral) and some received costimulation of the subthalamic nucleus or internal pallidum. Both rostral and caudal pedunculopontine nucleus subregions have been targeted. However, the spread of stimulation and variance in targeting means that neighboring brain stem regions may be implicated in any response. Low stimulation frequencies are typically employed (20-80 Hertz). The fluctuating nature of gait freezing can confound programming and outcome assessments. Although firm conclusions cannot be drawn on therapeutic efficacy, the literature suggests that medication refractory gait freezing and falls can improve. The impact on postural instability is unclear. Most groups report a lack of benefit on gait or limb akinesia or dopaminergic medication requirements. The key question is whether pedunculopontine nucleus DBS can improve quality of life in PD. So far, the evidence supporting such an effect is minimal. Development of pedunculopontine nucleus DBS to become a reliable, established therapy would likely require a collaborative effort between experienced centres to clarify biomarkers predictive of response and the optimal clinical methodology.



W. Usher, J. Amstutz, C. Brownlee, A. Knoll, I. Wald. “Progressive CPU Volume Rendering with Sample Accumulation,” In Eurographics Symposium on Parallel Graphics and Visualization, Edited by Alexandru Telea and Janine Bennett, The Eurographics Association, 2017.
ISBN: 978-3-03868-034-5
ISSN: 1727-348X
DOI: 10.2312/pgv.20171090

ABSTRACT

We present a new method for progressive volume rendering by accumulating object-space samples over successively rendered frames. Existing methods for progressive refinement either use image space methods or average pixels over frames, which can blur features or integrate incorrectly with respect to depth. Our approach stores samples along each ray, accumulates new samples each frame into a buffer, and progressively interleaves and integrates these samples. Though this process requires additional memory, it ensures interactivity and is well suited for CPU architectures with large memory and cache. This approach also extends well to distributed rendering in cluster environments. We implement this technique in Intel's open source OSPRay CPU ray tracing framework and demonstrate that it is particularly useful for rendering volumetric data with costly sampling functions.
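The per-ray bookkeeping can be sketched for a single ray: each frame, new stratified samples are jittered into the buffer, interleaved by depth with the accumulated samples, and the whole buffer is re-integrated front to back, so refinement never blurs across depth. A simplified single-ray illustration with hypothetical names, not OSPRay's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
density = lambda t: np.exp(-(t - 0.5) ** 2 / 0.02)  # stand-in for a costly sampling function

def render_frame(t_buf, s_buf, n_new=16, t0=0.0, t1=1.0):
    """Accumulate n_new jittered object-space samples along one ray, then re-integrate."""
    edges = np.linspace(t0, t1, n_new + 1)
    t_new = edges[:-1] + rng.random(n_new) * np.diff(edges)    # stratified jitter
    t_buf = np.concatenate([t_buf, t_new])
    s_buf = np.concatenate([s_buf, density(t_new)])
    order = np.argsort(t_buf)                                  # interleave by depth
    t_buf, s_buf = t_buf[order], s_buf[order]
    widths = np.diff(t_buf, append=t1)                         # depth-correct segment widths
    alpha = 1.0 - np.exp(-s_buf * widths)
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # front-to-back transmittance
    return t_buf, s_buf, np.sum(T * alpha)                     # opacity refines every frame

t, s = np.empty(0), np.empty(0)
for frame in range(8):
    t, s, opacity = render_frame(t, s)   # each frame reuses all prior samples
```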



W. Usher, P. Klacansky, F. Federer, P. T. Bremer, A. Knoll, J. Yarch, A. Angelucci, V. Pascucci. “A Virtual Reality Visualization Tool for Neuron Tracing,” In IEEE Transactions on Visualization and Computer Graphics, IEEE, 2017.
ISSN: 1077-2626
DOI: 10.1109/TVCG.2017.2744079

ABSTRACT

Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.



Y. Wan, C. Hansen. “Uncertainty Footprint: Visualization of Nonuniform Behavior of Iterative Algorithms Applied to 4D Cell Tracking,” In Computer Graphics Forum, Wiley, 2017.

ABSTRACT

Research on microscopy data from developing biological samples usually requires tracking individual cells over time. When cells are three-dimensionally and densely packed in a time-dependent scan of volumes, tracking results can become unreliable and uncertain. Not only are cell segmentation results often inaccurate to start with, but there is also no simple method for evaluating the tracking outcome. Previous cell tracking methods have been validated against benchmark data from real scans or artificial data, whose ground truth results are established by manual work or simulation. However, the wide variety of real-world data makes an exhaustive validation impossible. Established cell tracking tools often fail on new data, whose issues are also difficult to diagnose with only manual examinations. Therefore, data-independent tracking evaluation methods are desired for an explosion of microscopy data with increasing scale and resolution. In this paper, we propose the uncertainty footprint, an uncertainty quantification and visualization technique that examines nonuniformity at local convergence for an iterative evaluation process on a spatial domain supported by partially overlapping bases. We demonstrate that the patterns revealed by the uncertainty footprint indicate data processing quality in two algorithms from a typical cell tracking workflow – cell identification and association. A detailed analysis of the patterns further allows us to diagnose issues and design methods for improvements. A 4D cell tracking workflow equipped with the uncertainty footprint is capable of self-diagnosis and correction for a higher accuracy than previous methods whose evaluation is limited by manual examinations.



Y. Wan, H. Otsuna, H. A. Holman, B. Bagley, M. Ito, A. K. Lewis, M. Colasanto, G. Kardon, K. Ito, C. Hansen. “FluoRender: joint freehand segmentation and visualization for many-channel fluorescence data analysis,” In BMC Bioinformatics, Vol. 18, No. 1, Springer Nature, May, 2017.
DOI: 10.1186/s12859-017-1694-9

ABSTRACT

Background:
Image segmentation and registration techniques have enabled biologists to place large amounts of volume data from fluorescence microscopy, morphed three-dimensionally, onto a common spatial frame. Existing tools built on volume visualization pipelines for single channel or red-green-blue (RGB) channels have become inadequate for the new challenges of fluorescence microscopy. For a three-dimensional atlas of the insect nervous system, hundreds of volume channels are rendered simultaneously, while fluorescence intensity values from each channel need to be preserved for versatile adjustment and analysis. Although several existing tools have incorporated support of multichannel data using various strategies, the lack of a flexible design has made true many-channel visualization and analysis unavailable. The most common practice for many-channel volume data presentation is still converting and rendering pseudosurfaces, which are inaccurate for both qualitative and quantitative evaluations.

Results:
Here, we present an alternative design strategy that accommodates the visualization and analysis of about 100 volume channels, each of which can be interactively adjusted, selected, and segmented using freehand tools. Our multichannel visualization includes a multilevel streaming pipeline plus a triple-buffer compositing technique. Our method also preserves original fluorescence intensity values on graphics hardware, a crucial feature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, such as freehand segmentation. We have implemented the design strategies as a thorough restructuring of our original tool, FluoRender.

Conclusion:
The redesign of FluoRender not only maintains the existing multichannel capabilities for a greatly extended number of volume channels, but also enables new analysis functions for many-channel data from emerging biomedical-imaging techniques.



L. Yang, A. Narayan, P. Wang. “Sequential data assimilation with multiple nonlinear models and applications to subsurface flow,” In Journal of Computational Physics, Vol. 346, pp. 356--368. Oct, 2017.
ISSN: 0021-9991
DOI: 10.1016/j.jcp.2017.06.026

ABSTRACT

Complex systems are often described with competing models. Such divergence of interpretation of the system may stem from model fidelity, mathematical simplicity, and, more generally, our limited knowledge of the underlying processes. Meanwhile, available but limited observations of the system state can further complicate one's prediction choices. Over the years, data assimilation techniques, such as the Kalman filter, have become essential tools for improved system estimation by incorporating both model forecasts and measurements; but their potential to mitigate the impacts of the aforementioned model-form uncertainty has yet to be developed. Building on an earlier study of the multi-model Kalman filter, we propose a novel framework to assimilate multiple models with observation data for nonlinear systems, using the extended Kalman filter, the ensemble Kalman filter, and the particle filter, respectively. Through numerical examples of subsurface flow, we demonstrate that the new assimilation framework provides an effective and improved forecast of system behaviour.
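Of the three filters, the ensemble Kalman filter analysis step is the easiest to sketch; below is a standard stochastic (perturbed-observation) EnKF update for a forecast ensemble, with a linear observation operator assumed for brevity. Illustrative only; the paper's multi-model weighting is not shown.

```python
import numpy as np

def enkf_update(X, y, H, R, rng=np.random.default_rng(0)):
    """Stochastic EnKF analysis step.

    X: (n, N) forecast ensemble of states; y: (m,) observation;
    H: (m, n) linear observation operator; R: (m, m) observation covariance.
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)         # observation-space anomalies
    S = HA @ HA.T / (N - 1) + R                      # innovation covariance
    K = (A @ HA.T / (N - 1)) @ np.linalg.inv(S)      # ensemble Kalman gain
    # Perturb the observation once per ensemble member.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - HX)                          # analysis ensemble
```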


2016


K. Aras, B. Burton, D. Swenson, R.S. MacLeod. “Spatial organization of acute myocardial ischemia,” In Journal of Electrocardiology, Vol. 49, No. 3, Elsevier, pp. 323–336. May, 2016.

ABSTRACT

Introduction
Myocardial ischemia is a pathological condition initiated by an imbalance between blood supply and demand in the heart. Previous studies suggest that ischemia originates in the subendocardium, i.e., that nontransmural ischemia is limited to the subendocardium. By contrast, we hypothesized that acute myocardial ischemia is not limited to the subendocardium and sought to document its spatial distribution in an animal preparation. The goal of these experiments was to investigate the spatial organization of ischemia and its relationship to the resulting shifts in ST segment potentials during short episodes of acute ischemia.

Methods
We conducted acute ischemia studies in open-chest canines (N = 19) and swine (N = 10), which entailed creating carefully controlled ischemia using demand, supply, or complete occlusion ischemia protocols and recording intramyocardial and epicardial potentials. Elevation of the potentials at 40% of the ST segment between the J-point and the peak of the T-wave (ST40%) provided the metric for local ischemia. The threshold for ischemic ST segment elevations was defined as two standard deviations away from the baseline values.

Results
The relative frequency of occurrence of acute ischemia was higher in the subendocardium (78% for canines and 94% for swine) and the mid-wall (87% for canines and 97% for swine) in comparison with the subepicardium (30% for canines and 22% for swine). In addition, acute ischemia was seen arising throughout the myocardium (distributed pattern) in 87% of the canine and 94% of the swine episodes. Conversely, acute ischemia was seen originating only in the subendocardium (subendocardial pattern) in 13% of the canine episodes and 6% of the swine episodes (p < 0.05).

Conclusions
Our findings suggest that the spatial distribution of acute ischemia is a complex phenomenon arising throughout the myocardial wall and is not limited to the subendocardium.
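The ST40% metric above is straightforward to compute once beats are annotated: sample the electrogram 40% of the way from the J-point to the T-wave peak and flag values more than two standard deviations above baseline. A sketch of that calculation, assuming the J-point and T-peak sample indices and control-beat baseline values are already supplied; all names are illustrative.

```python
import numpy as np

def st40_elevation(egm, j_idx, tpeak_idx, baseline):
    """egm: one electrogram (array of samples); j_idx, tpeak_idx: J-point / T-peak indices;
    baseline: ST40% values measured on pre-occlusion control beats."""
    st40_idx = j_idx + int(round(0.4 * (tpeak_idx - j_idx)))  # 40% into the J-to-T-peak interval
    value = egm[st40_idx]
    threshold = baseline.mean() + 2.0 * baseline.std()        # 2 SD above control values
    return value, value > threshold                           # potential and ischemia flag
```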



P.R. Atkins, S.Y. Elhabian, P. Agrawal, M.D. Harris, R.T. Whitaker, J.A. Weiss, C.L. Peters, A.E. Anderson. “Quantitative comparison of cortical bone thickness using correspondence-based shape modeling in patients with cam femoroacetabular impingement,” In Journal of Orthopaedic Research, Wiley-Blackwell, Nov, 2016.
DOI: 10.1002/jor.23468

ABSTRACT

The proximal femur is abnormally shaped in patients with cam-type femoroacetabular impingement (FAI). Impingement may elicit bone remodeling at the proximal femur, causing increases in cortical bone thickness. We used correspondence-based shape modeling to quantify and compare cortical thickness between cam patients and controls for the location of the cam lesion and the proximal femur. Computed tomography images were segmented for 45 controls and 28 cam-type FAI patients. The segmentations were input to a correspondence-based shape model to identify the region of the cam lesion. Median cortical thickness data over the region of the cam lesion and the proximal femur were compared between mixed-gender and gender-specific groups. Median [interquartile range] thickness was significantly greater in FAI patients than controls in the cam lesion (1.47 [0.64] vs. 1.13 [0.22] mm, respectively; p < 0.001) and proximal femur (1.28 [0.30] vs. 0.97 [0.22] mm, respectively; p < 0.001). Maximum thickness in the region of the cam lesion was more anterior and less lateral (p < 0.001) in FAI patients. Male FAI patients had increased thickness compared to male controls in the cam lesion (1.47 [0.72] vs. 1.10 [0.19] mm, respectively; p < 0.001) and proximal femur (1.25 [0.29] vs. 0.94 [0.17] mm, respectively; p < 0.001). Thickness was not significantly different between male and female controls. Clinical significance: Studies of non-pathologic cadavers have provided guidelines regarding safe surgical resection depth for FAI patients. However, our results suggest impingement induces cortical thickening in cam patients, which may strengthen the proximal femur. Thus, these previously established guidelines may be too conservative.



J.L. Baker, J. Ryou, X.F. Wei, C.R. Butson, N.D. Schiff, K.P. Purpura. “Robust modulation of arousal regulation, performance, and frontostriatal activity through central thalamic deep brain stimulation in healthy nonhuman primates,” In Journal of Neurophysiology, Vol. 116, No. 5, American Physiological Society, pp. 2383--2404. Aug, 2016.
DOI: 10.1152/jn.01129.2015

ABSTRACT

The central thalamus (CT) is a key component of the brain-wide network underlying arousal regulation and sensory-motor integration during wakefulness in the mammalian brain. Dysfunction of the CT, typically a result of severe brain injury (SBI), leads to long-lasting impairments in arousal regulation and subsequent deficits in cognition. Central thalamic deep brain stimulation (CT-DBS) is proposed as a therapy to reestablish and maintain arousal regulation to improve cognition in select SBI patients. However, a mechanistic understanding of CT-DBS and an optimal method of implementing this promising therapy are unknown. Here we demonstrate in two healthy nonhuman primates (NHPs), Macaca mulatta, that location-specific CT-DBS improves performance in visuomotor tasks and is associated with physiological effects consistent with enhancement of endogenous arousal. Specifically, CT-DBS within the lateral wing of the central lateral nucleus and the surrounding medial dorsal thalamic tegmental tract (DTTm) produces a rapid and robust modulation of performance and arousal, as measured by neuronal activity in the frontal cortex and striatum. Notably, the most robust and reliable behavioral and physiological responses resulted when we implemented a novel method of CT-DBS that orients and shapes the electric field within the DTTm using spatially separated DBS leads. Collectively, our results demonstrate that selective activation within the DTTm of the CT robustly regulates endogenous arousal and enhances cognitive performance in the intact NHP; these findings provide insights into the mechanism of CT-DBS and further support the development of CT-DBS as a therapy for reestablishing arousal regulation to support cognition in SBI patients.



M. Barjatia, T. Tasdizen, B. Song, K.M. Golden. “Network modeling of Arctic melt ponds,” In Cold Regions Science and Technology, Vol. 124, Elsevier BV, pp. 40--53. April, 2016.
DOI: 10.1016/j.coldregions.2015.11.019

ABSTRACT

The recent precipitous losses of summer Arctic sea ice have outpaced the projections of most climate models. A number of efforts to improve these models have focused in part on a more accurate accounting of sea ice albedo or reflectance. In late spring and summer, the albedo of the ice pack is determined primarily by melt ponds that form on the sea ice surface. The transition of pond configurations from isolated structures to interconnected networks is critical in allowing the lateral flow of melt water toward drainage features such as large brine channels, fractures, and seal holes, which can alter the albedo by removing the melt water. Moreover, highly connected ponds can influence the formation of fractures and leads during ice break-up. Here we develop algorithmic techniques for mapping photographic images of melt ponds onto discrete conductance networks which represent the geometry and connectedness of pond configurations. The effective conductivity of the networks is computed to approximate the ease of lateral flow. We implement an image processing algorithm with mathematical morphology operations to produce a conductance matrix representation of the melt ponds. Basic clustering and edge elimination, using undirected graphs, are then used to map the melt pond connections and reduce the conductance matrix to include only direct connections. The results for images taken during different times of the year are visually inspected and the number of mislabels is used to evaluate performance.
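The final step, the effective conductivity of the pond network, is a standard resistor-network computation on the graph Laplacian: impose a unit potential drop across two boundary node sets and measure the total current. A generic sketch of that computation, assuming the conductance matrix has already been extracted from the imagery; it is not the authors' code.

```python
import numpy as np

def effective_conductance(C, src, snk):
    """C: (n, n) symmetric conductance matrix (C[i, j] > 0 iff ponds i and j connect);
    src, snk: index arrays of boundary nodes held at potentials 1 and 0."""
    n = C.shape[0]
    L = np.diag(C.sum(axis=1)) - C                     # weighted graph Laplacian
    fixed = np.concatenate([src, snk])
    free = np.setdiff1d(np.arange(n), fixed)
    u = np.zeros(n)
    u[src] = 1.0                                       # unit potential drop across the network
    # Solve L_ff u_f = -L_fb u_b for the interior (free) potentials.
    u[free] = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, fixed)] @ u[fixed])
    # Total current leaving the sources equals the effective conductance.
    return sum(C[i, j] * (u[i] - u[j]) for i in src for j in range(n))
```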



J. Beckvermit, T. Harman, C. Wight, M. Berzins. “Physical Mechanisms of DDT in an Array of PBX 9501 Cylinders,” SCI Institute, April, 2016.

ABSTRACT

The Deflagration to Detonation Transition (DDT) in large arrays (100s) of explosive devices is investigated using large-scale computer simulations running the Uintah Computational Framework. Our particular interest is understanding the fundamental physical mechanisms by which convective deflagration of cylindrical PBX 9501 devices can transition to a fully-developed detonation in transportation accidents. The simulations reveal two dominant mechanisms, inertial confinement and Impact to Detonation Transition. In this study we examined the role of physical spacing of the cylinders and how it influenced the initiation of DDT.