
SCI Publications

2017


M. Berzins, D. A. Bonnell, J. A. Cizewski, K. M. Heeger, A.J.G. Hey, C. J. Keane, B. A. Ramsey, K. A. Remington, J.L. Rempe. “Department of Energy, Advanced Scientific Computing Advisory Committee (ASCAC), Subcommittee on LDRD Review Final Report,” May, 2017.



C. Gritton, J. Guilkey, J. Hooper, D. Bedrov, R. M. Kirby, M. Berzins. “Using the material point method to model chemical/mechanical coupling in the deformation of a silicon anode,” In Modelling and Simulation in Materials Science and Engineering, Vol. 25, No. 4, pp. 045005. 2017.

ABSTRACT

The lithiation and delithiation of a silicon battery anode is modeled using the material point method (MPM). The main challenge in modeling this process with the MPM is to simulate stress-dependent diffusion coupled with concentration-dependent stress within a material that undergoes large deformations. MPM is chosen as the numerical method because of its ability to handle large deformations. A method for modeling diffusion within MPM is described. A stress-dependent model for diffusivity and three different constitutive models that fully couple the equations for stress with the equations for diffusion are considered. Verification tests for the accuracy of the numerical implementations of the models and validation tests against experimental results show the accuracy of the approach. The fully coupled stress-diffusion model implemented in MPM is then applied to modeling the lithiation and delithiation of silicon nanopillars.



J. K. Holmen, A. Humphrey, D. Sutherland, M. Berzins. “Improving Uintah's Scalability Through the Use of Portable Kokkos-Based Data Parallel Tasks,” In Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact, New Orleans, LA, USA, PEARC17, No. 27, ACM, New York, NY, USA pp. 27:1--27:8. 2017.
ISBN: 978-1-4503-5272-7
DOI: 10.1145/3093338.3093388

ABSTRACT

The University of Utah's Carbon Capture Multidisciplinary Simulation Center (CCMSC) is using the Uintah Computational Framework to predict performance of a 1000 MWe ultra-supercritical clean coal boiler. The center aims to utilize the Intel Xeon Phi-based DOE systems, Theta and Aurora, through the Aurora Early Science Program by using the Kokkos C++ library to enable node-level performance portability. This paper describes infrastructure advancements and portability improvements made possible by our integration of Kokkos within Uintah. Scalability results are presented that compare serial and data parallel task execution models for a challenging radiative heat transfer calculation, central to the center's predictive boiler simulations. These results demonstrate good strong-scaling characteristics to 256 Knights Landing (KNL) processors on the NSF Stampede system and show that the KNL-based calculation is competitive with prior GPU-based results for the same calculation.

Keywords: Hybrid Parallelism, Knights Landing, Kokkos, MIC, Many-Core, Parallel, Portability, Radiation Modeling, Reverse Monte-Carlo Ray Tracing, Scalability, Stampede, Uintah, Xeon Phi



T.A.J. Ouermi, A. Knoll, R.M. Kirby, M. Berzins. “OpenMP 4 Fortran Modernization of WSM6 for KNL,” In Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact, New Orleans, LA, USA, PEARC17, No. 12, ACM, New York, NY, USA pp. 12:1--12:8. 2017.
ISBN: 978-1-4503-5272-7
DOI: 10.1145/3093338.3093387

ABSTRACT

Parallel code portability in the petascale era requires modifying existing codes to support new architectures with large core counts and SIMD vector units. OpenMP is a well established and increasingly supported vehicle for portable parallelization. As architectures mature and compiler OpenMP implementations evolve, best practices for code modernization change as well. In this paper, we examine the impact of newer OpenMP features (in particular OMP SIMD) on the Intel Xeon Phi Knights Landing (KNL) architecture, applied in optimizing loops in the single moment 6-class microphysics module (WSM6) in the US Navy's NEPTUNE code. We find that with functioning OMP SIMD constructs, low thread invocation overhead on KNL and reduced penalty for unaligned access compared to previous architectures, one can leverage OpenMP 4 to achieve reasonable scalability with relatively minor reorganization of a production physics code.

Keywords: Knights Landing, OpenMP, overhead, parallel, thread parallelism, vector parallelism, weather forecasting



Y. Wan, C. Hansen. “Uncertainty Footprint: Visualization of Nonuniform Behavior of Iterative Algorithms Applied to 4D Cell Tracking,” In Computer Graphics Forum, Wiley, 2017.

ABSTRACT

Research on microscopy data from developing biological samples usually requires tracking individual cells over time. When cells are three-dimensionally and densely packed in a time-dependent scan of volumes, tracking results can become unreliable and uncertain. Not only are cell segmentation results often inaccurate to start with, but a simple method to evaluate the tracking outcome is also lacking. Previous cell tracking methods have been validated against benchmark data from real scans or artificial data, whose ground truth results are established by manual work or simulation. However, the wide variety of real-world data makes an exhaustive validation impossible. Established cell tracking tools often fail on new data, whose issues are also difficult to diagnose with only manual examination. Therefore, data-independent tracking evaluation methods are needed for the explosion of microscopy data of increasing scale and resolution. In this paper, we propose the uncertainty footprint, an uncertainty quantification and visualization technique that examines nonuniformity at local convergence for an iterative evaluation process on a spatial domain supported by partially overlapping bases. We demonstrate that the patterns revealed by the uncertainty footprint indicate data processing quality in two algorithms from a typical cell tracking workflow – cell identification and association. A detailed analysis of the patterns further allows us to diagnose issues and design methods for improvements. A 4D cell tracking workflow equipped with the uncertainty footprint is capable of self-diagnosis and correction, achieving higher accuracy than previous methods whose evaluation is limited by manual examination.



Y. Wan, H. Otsuna, H. A. Holman, B. Bagley, M. Ito, A. K. Lewis, M. Colasanto, G. Kardon, K. Ito, C. Hansen. “FluoRender: joint freehand segmentation and visualization for many-channel fluorescence data analysis,” In BMC Bioinformatics, Vol. 18, No. 1, Springer Nature, May, 2017.
DOI: 10.1186/s12859-017-1694-9

ABSTRACT

Background:
Image segmentation and registration techniques have enabled biologists to place large amounts of volume data from fluorescence microscopy, morphed three-dimensionally, onto a common spatial frame. Existing tools built on volume visualization pipelines for single channel or red-green-blue (RGB) channels have become inadequate for the new challenges of fluorescence microscopy. For a three-dimensional atlas of the insect nervous system, hundreds of volume channels are rendered simultaneously, while fluorescence intensity values from each channel need to be preserved for versatile adjustment and analysis. Although several existing tools have incorporated support of multichannel data using various strategies, the lack of a flexible design has made true many-channel visualization and analysis unavailable. The most common practice for many-channel volume data presentation is still converting and rendering pseudosurfaces, which are inaccurate for both qualitative and quantitative evaluations.

Results:
Here, we present an alternative design strategy that accommodates the visualization and analysis of about 100 volume channels, each of which can be interactively adjusted, selected, and segmented using freehand tools. Our multichannel visualization includes a multilevel streaming pipeline plus a triple-buffer compositing technique. Our method also preserves original fluorescence intensity values on graphics hardware, a crucial feature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, such as freehand segmentation. We have implemented the design strategies as a thorough restructuring of our original tool, FluoRender.

Conclusion:
The redesign of FluoRender not only maintains the existing multichannel capabilities for a greatly extended number of volume channels, but also enables new analysis functions for many-channel data from emerging biomedical-imaging techniques.


2016


K. Aras, B. Burton, D. Swenson, R.S. MacLeod. “Spatial organization of acute myocardial ischemia,” In Journal of Electrocardiology, Vol. 49, No. 3, Elsevier, pp. 323–336. May, 2016.

ABSTRACT

Introduction
Myocardial ischemia is a pathological condition initiated by supply and demand imbalance of the blood to the heart. Previous studies suggest that ischemia originates in the subendocardium, i.e., that nontransmural ischemia is limited to the subendocardium. By contrast, we hypothesized that acute myocardial ischemia is not limited to the subendocardium and sought to document its spatial distribution in an animal preparation. The goal of these experiments was to investigate the spatial organization of ischemia and its relationship to the resulting shifts in ST segment potentials during short episodes of acute ischemia.

Methods
We conducted acute ischemia studies in open-chest canines (N = 19) and swine (N = 10), which entailed creating carefully controlled ischemia using demand, supply, or complete occlusion ischemia protocols and recording intramyocardial and epicardial potentials. Elevation of the potentials at 40% of the ST segment between the J-point and the peak of the T-wave (ST40%) provided the metric for local ischemia. The threshold for ischemic ST segment elevations was defined as two standard deviations away from the baseline values.

Results
The relative frequency of occurrence of acute ischemia was higher in the subendocardium (78% for canines and 94% for swine) and the mid-wall (87% for canines and 97% for swine) in comparison with the subepicardium (30% for canines and 22% for swine). In addition, acute ischemia was seen arising throughout the myocardium (distributed pattern) in 87% of the canine and 94% of the swine episodes. Alternatively, acute ischemia was seen originating only in the subendocardium (subendocardial pattern) in 13% of the canine episodes and 6% of the swine episodes (p < 0.05).

Conclusions
Our findings suggest that the spatial distribution of acute ischemia is a complex phenomenon arising throughout the myocardial wall and is not limited to the subendocardium.



P.R. Atkins, S.Y. Elhabian, P. Agrawal, M.D. Harris, R.T. Whitaker, J.A. Weiss, C.L. Peters, A.E. Anderson. “Quantitative comparison of cortical bone thickness using correspondence-based shape modeling in patients with cam femoroacetabular impingement,” In Journal of Orthopaedic Research, Wiley-Blackwell, Nov, 2016.
DOI: 10.1002/jor.23468

ABSTRACT

The proximal femur is abnormally shaped in patients with cam-type femoroacetabular impingement (FAI). Impingement may elicit bone remodeling at the proximal femur, causing increases in cortical bone thickness. We used correspondence-based shape modeling to quantify and compare cortical thickness between cam patients and controls for the location of the cam lesion and the proximal femur. Computed tomography images were segmented for 45 controls and 28 cam-type FAI patients. The segmentations were input to a correspondence-based shape model to identify the region of the cam lesion. Median cortical thickness data over the region of the cam lesion and the proximal femur were compared between mixed-gender and gender-specific groups. Median [interquartile range] thickness was significantly greater in FAI patients than controls in the cam lesion (1.47 [0.64] vs. 1.13 [0.22] mm, respectively; p < 0.001) and proximal femur (1.28 [0.30] vs. 0.97 [0.22] mm, respectively; p < 0.001). Maximum thickness in the region of the cam lesion was more anterior and less lateral (p < 0.001) in FAI patients. Male FAI patients had increased thickness compared to male controls in the cam lesion (1.47 [0.72] vs. 1.10 [0.19] mm, respectively; p < 0.001) and proximal femur (1.25 [0.29] vs. 0.94 [0.17] mm, respectively; p < 0.001). Thickness was not significantly different between male and female controls. Clinical significance: Studies of non-pathologic cadavers have provided guidelines regarding safe surgical resection depth for FAI patients. However, our results suggest impingement induces cortical thickening in cam patients, which may strengthen the proximal femur. Thus, these previously established guidelines may be too conservative.



J.L. Baker, J. Ryou, X.F. Wei, C.R. Butson, N.D. Schiff, K.P. Purpura. “Robust modulation of arousal regulation, performance, and frontostriatal activity through central thalamic deep brain stimulation in healthy nonhuman primates,” In Journal of Neurophysiology, Vol. 116, No. 5, American Physiological Society, pp. 2383--2404. Aug, 2016.
DOI: 10.1152/jn.01129.2015

ABSTRACT

The central thalamus (CT) is a key component of the brain-wide network underlying arousal regulation and sensory-motor integration during wakefulness in the mammalian brain. Dysfunction of the CT, typically a result of severe brain injury (SBI), leads to long-lasting impairments in arousal regulation and subsequent deficits in cognition. Central thalamic deep brain stimulation (CT-DBS) is proposed as a therapy to reestablish and maintain arousal regulation to improve cognition in select SBI patients. However, a mechanistic understanding of CT-DBS and an optimal method of implementing this promising therapy are unknown. Here we demonstrate in two healthy nonhuman primates (NHPs), Macaca mulatta, that location-specific CT-DBS improves performance in visuomotor tasks and is associated with physiological effects consistent with enhancement of endogenous arousal. Specifically, CT-DBS within the lateral wing of the central lateral nucleus and the surrounding medial dorsal thalamic tegmental tract (DTTm) produces a rapid and robust modulation of performance and arousal, as measured by neuronal activity in the frontal cortex and striatum. Notably, the most robust and reliable behavioral and physiological responses resulted when we implemented a novel method of CT-DBS that orients and shapes the electric field within the DTTm using spatially separated DBS leads. Collectively, our results demonstrate that selective activation within the DTTm of the CT robustly regulates endogenous arousal and enhances cognitive performance in the intact NHP; these findings provide insights into the mechanism of CT-DBS and further support the development of CT-DBS as a therapy for reestablishing arousal regulation to support cognition in SBI patients.



J. Beckvermit, T. Harman, C. Wight, M. Berzins. “Physical Mechanisms of DDT in an Array of PBX 9501 Cylinders,” SCI Institute, April, 2016.

ABSTRACT

The Deflagration to Detonation Transition (DDT) in large arrays (100s) of explosive devices is investigated using large-scale computer simulations running the Uintah Computational Framework. Our particular interest is understanding the fundamental physical mechanisms by which convective deflagration of cylindrical PBX 9501 devices can transition to a fully-developed detonation in transportation accidents. The simulations reveal two dominant mechanisms, inertial confinement and Impact to Detonation Transition. In this study we examined the role of physical spacing of the cylinders and how it influenced the initiation of DDT.



J. Beckvermit, T. Harman, C. Wight, M. Berzins. “Packing Configurations of PBX-9501 Cylinders to Reduce the Probability of a Deflagration to Detonation Transition (DDT),” In Propellants, Explosives, Pyrotechnics, 2016.
ISSN: 1521-4087
DOI: 10.1002/prep.201500331

ABSTRACT

The detonation of hundreds of explosive devices from either a transportation or storage accident is an extremely dangerous event. This paper focuses on identifying ways of packing/storing arrays of explosive cylinders that will reduce the probability of a Deflagration to Detonation Transition (DDT). The Uintah Computational Framework was utilized to predict the conditions necessary for a large scale DDT to occur. The results showed that the arrangement of the explosive cylinders and the number of devices packed in a "box" greatly affects the probability of a detonation.



M. Berzins, J. Beckvermit, T. Harman, A. Bezdjian, A. Humphrey, Q. Meng, J. Schmidt, C. Wight. “Extending the Uintah Framework through the Petascale Modeling of Detonation in Arrays of High Explosive Devices,” In SIAM Journal on Scientific Computing (Accepted), 2016.

ABSTRACT

The Uintah framework for solving a broad class of fluid-structure interaction problems uses a layered taskgraph approach that decouples the problem specification as a set of tasks from the adaptive runtime system that executes these tasks. Uintah has been developed by using a problem-driven approach that dates back to its inception. Using this approach it is possible to improve the performance of the problem-independent software components to enable the solution of broad classes of problems as well as the driving problem itself. This process is illustrated by a motivating problem that is the computational modeling of the hazards posed by thousands of explosive devices during a Deflagration to Detonation Transition (DDT) that occurred on Highway 6 in Utah. In order to solve this complex fluid-structure interaction problem at the required scale, algorithmic and data structure improvements were needed in a code that already appeared to work well at scale. These transformations enabled scalable runs for our target problem and provided the capability to model the transition to detonation. The performance improvements achieved are shown and the solution to the target problem provides insight as to why the detonation happened, as well as to a possible remediation strategy.



A. Bigelow, R. Choudhury, J. Baumes. “Resonant Laboratory and Candela: Spreading Your Visualization Ideas to the Masses,” In Proceedings of Workshop on Visualization in Practice (VIP '16), Note: Best Paper Award, 2016.

ABSTRACT

Visualization practitioners are constantly developing new, innovative ways to visualize data, but much of the software that practitioners produce does not make it into production in professional systems. To solve this problem, we have developed and informally tested two open source systems. The first, Candela, is a framework and API for creating visualization components for the web that can wrap up new or existing visualizations as needed. Because Candela's API generalizes the inputs to a visualization, we have also developed a system called Resonant Laboratory that makes it possible for novice users to connect arbitrary datasets to Candela visualizations. Together, these systems enable novice users to explore and share their data with the growing library of state-of-the-art visualization techniques.



C. Christensen, S. Liu, G. Scorzelli, J. Lee, P.-T. Bremer, V. Pascucci. “Embedded Domain-Specific Language and Runtime System for Progressive Spatiotemporal Data Analysis and Visualization,” In Symposium on Large Data Analysis and Visualization, IEEE, 2016.

ABSTRACT

As our ability to generate large and complex datasets grows, accessing and processing these massive data collections is increasingly the primary bottleneck in scientific analysis. Challenges include retrieving, converting, resampling, and combining remote and often disparately located data ensembles with only limited support from existing tools. In particular, existing solutions rely predominantly on extensive data transfers or large-scale remote computing resources, both of which are inherently offline processes with long delays and substantial repercussions for any mistakes. Such workflows severely limit the flexible exploration and rapid evaluation of new hypotheses that are crucial to the scientific process and thereby impede scientific discovery. Here we present an embedded domain-specific language (EDSL) specifically designed for the interactive exploration of large-scale, remote data. Our EDSL allows users to express a wide range of data analysis operations in a simple and abstract manner. The underlying runtime system transparently resolves issues such as remote data access and resampling while at the same time maintaining interactivity through progressive and interruptible computation. This system enables, for the first time, interactive remote exploration of massive datasets such as the 7km NASA GEOS-5 Nature Run simulation, which previously have been analyzed only offline or at reduced resolution.



S. Elhabian, C. Vachet, J. Piven, M. Styner, G. Gerig. “Compressive sensing based Q-space resampling for handling fast bulk motion in hardi acquisitions,” In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, pp. 907--910. April, 2016.
DOI: 10.1109/isbi.2016.7493412

ABSTRACT

Diffusion-weighted (DW) MRI has become a widely adopted imaging modality to reveal the underlying brain connectivity. Long acquisition times and/or non-cooperative patients increase the chances of motion-related artifacts. Whereas slow bulk motion results in inter-gradient misalignment which can be handled via retrospective motion correction algorithms, fast bulk motion usually affects data during the application of a single diffusion gradient, causing signal dropout artifacts. Common practice is to discard gradients exhibiting signal attenuation because of the difficulty of their retrospective correction, with the disadvantage of losing entire gradients for further processing. Nonetheless, such attenuation might affect only a limited number of slices within a gradient volume. Q-space resampling has recently been proposed to recover corrupted slices while saving gradients for subsequent reconstruction. However, a small number of corrupted gradients is implicitly assumed, which might not hold when scanning unsedated infants or patients in pain. In this paper, we propose to adopt recent advances in compressive sensing based reconstruction of the diffusion orientation distribution functions (ODF) with undersampled measurements to resample corrupted slices. We make use of Simple Harmonic Oscillator based Reconstruction and Estimation (SHORE) basis functions which can analytically model the ODF from arbitrarily sampled signals. We demonstrate the impact of the proposed resampling strategy compared to state-of-the-art resampling and gradient exclusion on simulated intra-gradient motion as well as samples from real DWI data.



S. Elhabian, P. Agrawal, R. Whitaker. “Optimal parameter map estimation for shape representation: A generative approach,” In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, pp. 660--663. April, 2016.
DOI: 10.1109/isbi.2016.7493353

ABSTRACT

Probabilistic label maps are a useful tool for important medical image analysis tasks such as segmentation, shape analysis, and atlas building. Existing methods typically rely on blurred signed distance maps or smoothed label maps to model uncertainties and shape variabilities, which do not conform to any generative model or estimation process, and are therefore suboptimal. In this paper, we propose to learn probabilistic label maps using a generative model on a given set of binary label maps. The proposed approach generalizes well on unseen data while simultaneously capturing the variability in the training samples. The efficiency of the proposed approach is demonstrated for consensus generation and shape-based clustering using synthetic datasets as well as left atrial segmentations from late-gadolinium enhancement MRI.



B. Erem, R.M. Orellana, D.E. Hyde, J.M. Peters, F.H. Duffy, P. Stovicek, S.K. Warfield, R.S. MacLeod, G. Tadmor, D.H. Brooks. “Extensions to a manifold learning framework for time-series analysis on dynamic manifolds in bioelectric signals,” In Physical Review E, Vol. 93, No. 4, American Physical Society, April, 2016.
DOI: 10.1103/physreve.93.042218

ABSTRACT

This paper addresses the challenge of extracting meaningful information from measured bioelectric signals generated by complex, large scale physiological systems such as the brain or the heart. We focus on a combination of the well-known Laplacian eigenmaps machine learning approach with dynamical systems ideas to analyze emergent dynamic behaviors. The method reconstructs the abstract dynamical system phase-space geometry of the embedded measurements and tracks changes in physiological conditions or activities through changes in that geometry. It is geared to extract information from the joint behavior of time traces obtained from large sensor arrays, such as those used in multiple-electrode ECG and EEG, and explore the geometrical structure of the low dimensional embedding of moving time windows of those joint snapshots. Our main contribution is a method for mapping vectors from the phase space to the data domain. We present cases to evaluate the methods, including a synthetic example using the chaotic Lorenz system, several sets of cardiac measurements from both canine and human hearts, and measurements from a human brain.



L.D.J. Fiederer, J. Vorwerk, F. Lucka, M. Dannhauer, S. Yang, M. Dümpelmann, A. Schulze-Bonhage, A. Aertsen, O. Speck, C.H. Wolters, T. Ball. “The role of blood vessels in high-resolution volume conductor head modeling of EEG,” In NeuroImage, Vol. 128, Elsevier, pp. 193--208. March, 2016.
DOI: 10.1016/j.neuroimage.2015.12.041

ABSTRACT

Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we for the first time quantify the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10^6 nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which allowed reducing computation times substantially, and quantified the importance of the blood vessel compartment by computing forward and inverse errors resulting from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approx. 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as the insula or the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach similar magnitudes as those previously reported for neglecting white matter anisotropy, the CSF, or the dura, structures which are generally considered important components of realistic EEG head models. Our findings thus imply that including a realistic blood vessel compartment in EEG head models will be helpful to improve the accuracy of EEG source analyses, particularly when high accuracies in brain areas with dense vasculature are required.



C. Gall, S. Schmidt, M.P. Schittkowski, A. Antal, G. Ambrus, W. Paulus, M. Dannhauer, R. Michalik, A. Mante, M. Bola, A. Lux, S. Kropf, S.A. Brandt, B.A. Sabel. “Alternating Current Stimulation for Vision Restoration after Optic Nerve Damage: A Randomized Clinical Trial,” In PLOS ONE, Vol. 11, No. 6, Public Library of Science, pp. e0156134. June, 2016.
DOI: 10.1371/journal.pone.0156134

ABSTRACT

Background
Vision loss after optic neuropathy is considered irreversible. Here, repetitive transorbital alternating current stimulation (rtACS) was applied in partially blind patients with the goal of activating their residual vision.

Methods
We conducted a multicenter, prospective, randomized, double-blind, sham-controlled trial in an ambulatory setting with daily application of rtACS (n = 45) or sham-stimulation (n = 37) for 50 min on each of 10 weekdays. A volunteer sample of patients with optic nerve damage (mean age 59.1 yrs) was recruited. The primary outcome measure for efficacy was super-threshold visual fields within 48 hrs after the last treatment day and at 2-month follow-up. Secondary outcome measures were near-threshold visual fields, reaction time, visual acuity, and resting-state EEGs to assess changes in brain physiology.

Results
The rtACS-treated group had a mean improvement in visual field of 24.0% which was significantly greater than after sham-stimulation (2.5%). This improvement persisted for at least 2 months in terms of both within- and between-group comparisons. Secondary analyses revealed improvements of near-threshold visual fields in the central 5° and increased thresholds in static perimetry after rtACS and improved reaction times, but visual acuity did not change compared to shams. Visual field improvement induced by rtACS was associated with EEG power-spectra and coherence alterations in visual cortical networks which are interpreted as signs of neuromodulation. Current flow simulation indicates current in the frontal cortex, eye, and optic nerve and in the subcortical but not in the cortical regions.

Conclusion
rtACS treatment is a safe and effective means to partially restore vision after optic nerve damage probably by modulating brain plasticity. This class 1 evidence suggests that visual fields can be improved in a clinically meaningful way.



Y. Gao, M. Zhang, K. Grewen, P. T. Fletcher, G. Gerig. “Image registration and segmentation in longitudinal MRI using temporal appearance modeling,” In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, pp. 629--632. April, 2016.
DOI: 10.1109/isbi.2016.7493346