A. Bigelow, S. Drucker, D. Fisher, M.D. Meyer. Reflections on How Designers Design With Data, In Proceedings of the ACM International Conference on Advanced Visual Interfaces (AVI), Note: Awarded Best Paper!, 2014.
Keywords: Visualization, infographics, design practice
J.J.E. Blauer, D. Swenson, K. Higuchi, G. Plank, R. Ranjan, N. Marrouche, R.S. MacLeod. Sensitivity and Specificity of Substrate Mapping: An In Silico Framework for the Evaluation of Electroanatomical Substrate Mapping Strategies, In Journal of Cardiovascular Electrophysiology, Vol. 25, No. 7, Note: Featured on journal cover, pp. 774--780. May, 2014.
Keywords: arrhythmia, computer-based model, electroanatomical mapping, voltage mapping, bipolar electrogram
Topological Methods in Data Analysis and Visualization III, Edited by Peer-Timo Bremer, Ingrid Hotz, Valerio Pascucci, and Ronald Peikert, Springer International Publishing, 2014.
M.S. Okun, S.S. Wu, S. Fayad, H. Ward, D. Bowers, C. Rosado, L. Bowen, C. Jacobson, C.R. Butson, K.D. Foote. Acute and Chronic Mood and Apathy Outcomes from a Randomized Study of Unilateral STN and GPi DBS, In PLoS ONE, Vol. 9, No. 12, pp. e114140. December, 2014.
Objective: To study mood and behavioral effects of unilateral and staged bilateral subthalamic nucleus (STN) and globus pallidus internus (GPi) deep brain stimulation (DBS) for Parkinson's disease (PD).
Background: There are numerous reports of mood changes following DBS; however, most have focused on bilateral simultaneous STN implants with rapid and aggressive post-operative medication reduction.
Methods: A standardized evaluation was applied to a subset of patients undergoing STN or GPi DBS who were also enrolled in the NIH COMPARE study. The Unified Parkinson Disease Rating Scale (UPDRS III), the Hamilton depression (HAM-D) and anxiety (HAM-A) rating scales, the Yale-Brown obsessive-compulsive scale (YBOCS), the Apathy Scale (AS), and the Young mania rating scale (YMRS) were used. The scales were repeated at acute and chronic intervals. A post-operative strategy of non-aggressive medication reduction was employed.
Results: Thirty patients were randomized and underwent unilateral DBS (16 STN, 14 GPi). There were no baseline differences. The GPi group had a higher mean dopaminergic dosage at 1 year; however, the between-group difference in change from baseline to 1 year was not significant. There were no differences between groups in mood or motor outcomes. When the STN and GPi groups were combined, the HAM-A scores worsened at 2, 4, and 6 months and at 1 year compared with baseline; the HAM-D and YMRS scores worsened at 4 and 6 months and at 1 year; and the UPDRS Motor scores improved at 4 months and 1 year. Psychiatric diagnoses (DSM-IV) did not change. No between-group differences were observed in the cohort of bilateral cases.
Conclusions: There were few changes in mood and behavior with STN or GPi DBS. The approach of staging STN or GPi DBS without aggressive medication reduction could be a viable option for managing PD surgical candidates. A study of bilateral DBS and of medication reduction will be required to better understand risks and benefits of a bilateral approach.
B. Chapman, H. Calandra, S. Crivelli, J. Dongarra, J. Hittinger, C.R. Johnson, S.A. Lathrop, V. Sarkar, E. Stahlberg, J.S. Vetter, D. Williams.
ASCAC Workforce Subcommittee Letter, Note: Office of Scientific and Technical Information, DOE ASCAC Committee Report, July, 2014.
Simulation and computing are essential to much of the research conducted at the DOE national laboratories. Experts in the ASCR-relevant Computing Sciences, which encompass a range of disciplines including Computer Science, Applied Mathematics, Statistics and domain sciences, are an essential element of the workforce in nearly all of the DOE national laboratories. This report seeks to identify the gaps and challenges facing DOE with respect to this workforce.
The DOE laboratories provided the committee with information on disciplines in which they experienced workforce gaps. For the larger laboratories, the majority of the cited workforce gaps were in the Computing Sciences. Since this category spans multiple disciplines, it was difficult to obtain comprehensive information on workforce gaps in the available timeframe. Nevertheless, five multi-purpose laboratories provided additional relevant data on recent hiring and retention.
Data on academic coursework were reviewed. Studies on multidisciplinary education in Computational Science and Engineering (CS&E) revealed that, while the number of CS&E courses offered is growing, overall availability remains low and the coursework fails to provide the skills needed to apply CS&E to real-world applications. The number of graduates in different fields within Computer Science (CS) and Computer Engineering (CE) was also reviewed, confirming that specialization in DOE areas of interest is less common than in many other areas.
Projections of industry needs and employment figures (mostly for CS and CE) were examined. They indicate a high and increasing demand for graduates in all areas of computing, with little unemployment. This situation will be exacerbated by large numbers of retirees in the coming decade. Further, relatively few US students pursue higher degrees in the Computing Sciences, and those who do are predominantly white and male. As a result of this demographic imbalance, foreign nationals make up an increasing fraction of the graduate population, and the field fails to benefit from the inclusion of women and underrepresented minorities.
There is already a program that supports graduate education that is tailored to the needs of the DOE laboratories. The Computational Science Graduate Fellowship (CSGF) enables graduates to pursue a multidisciplinary program of education that is coupled with practical experience at the laboratories. It has been demonstrated to be highly effective in both its educational goals and in its ability to supply talent to the laboratories. However, its current size and scope are too limited to solve the workforce problems identified. The committee felt strongly that this proven program should be extended to increase its ability to support the DOE mission.
Since no single program can eliminate the workforce gap, existing recruitment efforts by the laboratories were examined. It was found that the laboratories already make considerable effort to recruit in this area. Although some challenges, such as the inability to match industry compensation, cannot be directly addressed, DOE could develop a roadmap to increase the impact of individual laboratory efforts, to enhance the suitability of existing educational opportunities, to increase the attractiveness of the laboratories, and to attract and sustain a full spectrum of human talent, which includes women and underrepresented minorities.
Over the last decade, block-structured adaptive mesh refinement (SAMR) has found increasing use in large, publicly available codes and frameworks. SAMR frameworks have evolved along different paths: some have stayed focused on specific domain areas, while others have pursued more general functionality, providing the building blocks for a larger variety of applications. In this survey paper we examine a representative set of SAMR packages and SAMR-based codes that have been in existence for half a decade or more, have a reasonably sized and active user base outside of their home institutions, and are publicly available. The set consists of a mix of SAMR packages and application codes that cover a broad range of scientific domains. We look at their high-level frameworks, their design trade-offs, and their approaches to dealing with the advent of radical changes in hardware architecture. The codes included in this survey are BoxLib, Cactus, Chombo, Enzo, FLASH, and Uintah.
Keywords: SAMR, BoxLib, Chombo, FLASH, Cactus, Enzo, Uintah
We propose a generic method for the statistical analysis of collections of anatomical shape complexes, namely sets of surfaces that were previously segmented and labeled in a group of subjects. The method estimates an anatomical model, the template complex, that is representative of the population under study. Its shape reflects anatomical invariants within the dataset. In addition, the method automatically places control points near the most variable parts of the template complex. Vectors attached to these points are parameters of deformations of the ambient 3D space. These deformations warp the template to each subject’s complex in a way that preserves the organization of the anatomical structures. Multivariate statistical analysis is applied to these deformation parameters to test for group differences. Results of the statistical analysis are then expressed in terms of deformation patterns of the template complex, and can be visualized and interpreted.
The user needs only to specify the topology of the template complex and the number of control points. The method then automatically estimates the shape of the template complex, the optimal position of control points and deformation parameters. The proposed approach is completely generic with respect to any type of application and well adapted to efficient use in clinical studies, in that it does not require point correspondence across surfaces and is robust to mesh imperfections such as holes, spikes, inconsistent orientation or irregular meshing.
The approach is illustrated with a neuroimaging study of Down syndrome (DS). Results demonstrate that the complex of deep brain structures shows a statistically significant shape difference between control and DS subjects. The deformation-based modeling is able to classify subjects with very high specificity and sensitivity, thus showing important generalization capability even given a low sample size. We show that results remain significant even if the number of control points, and hence the dimension of the variables in the statistical model, is drastically reduced. The analysis even suggests that parsimonious models may have better statistical performance.
The method has been implemented in the software Deformetrica, which is publicly available at www.deformetrica.org.
Keywords: morphometry, deformation, varifold, anatomy, shape, statistics
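To make the group-difference testing step above concrete, here is a minimal sketch of a permutation test, reduced to one made-up scalar summary per subject (say, the norm of the deformation vector at a single control point). Deformetrica's actual analysis is multivariate; the data, group sizes, and summary statistic here are illustrative assumptions only.

    /* Hypothetical sketch: permutation test for a group difference on
     * scalar summaries of deformation parameters. All data are made up;
     * the real analysis is multivariate. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    static double mean(const double *x, int n) {
        double s = 0.0;
        for (int i = 0; i < n; ++i) s += x[i];
        return s / n;
    }

    int main(void) {
        /* Made-up deformation magnitudes for control and DS subjects. */
        double ctrl[] = {0.9, 1.1, 1.0, 0.8, 1.2, 1.0};
        double ds[]   = {1.6, 1.4, 1.8, 1.5, 1.7, 1.3};
        const int nc = 6, nd = 6, n = nc + nd, nperm = 10000;

        double all[12];
        for (int i = 0; i < nc; ++i) all[i] = ctrl[i];
        for (int i = 0; i < nd; ++i) all[nc + i] = ds[i];

        double observed = mean(ds, nd) - mean(ctrl, nc);

        srand(42);
        int extreme = 0;
        for (int p = 0; p < nperm; ++p) {
            /* Fisher-Yates shuffle draws a random relabeling of subjects. */
            for (int i = n - 1; i > 0; --i) {
                int j = rand() % (i + 1);
                double t = all[i]; all[i] = all[j]; all[j] = t;
            }
            double d = mean(all + nc, nd) - mean(all, nc);
            if (fabs(d) >= fabs(observed)) ++extreme;   /* two-sided */
        }
        printf("observed diff = %.3f, permutation p = %.4f\n",
               observed, (double)(extreme + 1) / (nperm + 1));
        return 0;
    }

The p-value estimates how often a random relabeling of subjects produces a group difference at least as large as the observed one, which is the logic behind testing deformation parameters for group differences.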
S. Elhabian, Y. Gur, C. Vachet, J. Piven, M. Styner, I. Leppert, G.B. Pike, G. Gerig. A Preliminary Study on the Effect of Motion Correction On HARDI Reconstruction, In Proceedings of the 2014 IEEE International Symposium on Biomedical Imaging (ISBI), pp. (accepted). 2014.
Keywords: Diffusion MRI, HARDI, motion correction, interpolation
Subject-Motion Correction in HARDI Acquisitions: Choices and Consequences, In Proceedings of the 2014 Joint Annual Meeting ISMRM-ESMRMB, pp. (accepted). 2014.
Unlike anatomical MRI, where subject motion can most often be assessed by quick visual quality control, the detection, characterization, and evaluation of the impact of motion in diffusion imaging are challenging issues due to the sensitivity of diffusion weighted imaging (DWI) to motion originating from vibration, cardiac pulsation, breathing, and head movement. Post-acquisition motion correction is widely performed, e.g. using the open-source DTIprep software [1,2] or TORTOISE, but particularly in high angular resolution diffusion imaging (HARDI), users often do not fully understand the consequences of different correction schemes on the final analysis, and whether those choices may introduce confounding factors when comparing populations. Although there is excellent theoretical work on the number of DWI directions and their effect on the quality and crossing-fiber resolution of orientation distribution functions (ODFs), standard users lack clear guidelines and recommendations in practical settings. This research investigates motion correction using transformation and interpolation of affected DWI directions versus the exclusion of subsets of DWIs, and its effects on diffusion measurements, on the reconstructed fiber orientation distribution functions, and on the estimated fiber orientations. The various effects are systematically studied via a newly developed synthetic phantom and also on real HARDI data.
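As a rough illustration of the two correction choices contrasted above (excluding a motion-corrupted DWI direction versus re-estimating it), the sketch below interpolates one corrupted measurement from the remaining directions by inverse-angular-distance weighting. This toy scheme is a stand-in for the model-based interpolation real pipelines use; the gradient directions and signals are made up.

    /* Hypothetical sketch: exclusion vs. interpolation of one corrupted
     * DWI direction. The weighting is a toy stand-in for model-based
     * interpolation; all numbers are made up. */
    #include <stdio.h>
    #include <math.h>

    #define NDIR 6

    int main(void) {
        /* Made-up unit gradient directions (already normalized). */
        double g[NDIR][3] = {
            {1,0,0}, {0,1,0}, {0,0,1},
            {0.7071,0.7071,0}, {0.7071,0,0.7071}, {0,0.7071,0.7071}
        };
        /* Made-up diffusion-weighted signals; direction 3 is corrupted. */
        double s[NDIR] = {0.55, 0.60, 0.58, -1.0, 0.57, 0.59};
        int bad = 3;

        /* Choice 1: exclusion -- reconstruct from the NDIR-1 remaining
         * samples (not shown), which changes the angular sampling. */

        /* Choice 2: interpolation -- estimate the missing signal from
         * neighbors, weighted by inverse angular distance. The fabs()
         * encodes antipodal symmetry of the DWI signal. */
        double num = 0.0, den = 0.0;
        for (int i = 0; i < NDIR; ++i) {
            if (i == bad) continue;
            double dot = fabs(g[i][0]*g[bad][0] + g[i][1]*g[bad][1]
                            + g[i][2]*g[bad][2]);
            if (dot > 1.0) dot = 1.0;
            double ang = acos(dot);          /* angular distance */
            double w = 1.0 / (ang + 1e-6);
            num += w * s[i];
            den += w;
        }
        printf("interpolated signal for direction %d: %.4f\n",
               bad, num / den);
        return 0;
    }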
S. Elhabian, Y. Gur, J. Piven, M. Styner, I. Leppert, G.B. Pike, G. Gerig. Motion is Inevitable: The Impact of Motion Correction Schemes on HARDI Reconstructions, In Proceedings of the MICCAI 2014 Workshop on Computational Diffusion MRI, September, 2014.
T. Etiene, D. Jonsson, T. Ropinski, C. Scheidegger, J.L.D. Comba, L.G. Nonato, R.M. Kirby, A. Ynnerman, C.T. Silva. Verifying Volume Rendering Using Discretization Error Analysis, In IEEE Transactions on Visualization and Computer Graphics, Vol. 20, No. 1, IEEE, pp. 140-154. January, 2014.
We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization of this continuous model (Riemann summation), we make assumptions about the impact of parameter changes on the rendered results and derive convergence curves describing the expected behavior. Specifically, we progressively refine the number of samples along the ray, the grid size, and the pixel size, and evaluate how the errors observed during refinement compare against the expected approximation errors. We derive the theoretical foundations of our verification approach, explain how to realize it in practice, and discuss its limitations. We also report the errors identified by our approach when applied to two publicly available volume rendering packages.
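A minimal sketch of the sample-refinement idea, under simplifying assumptions not taken from the paper (a single ray, analytic extinction tau(t) = t, no emission color): approximate the optical depth with a left-endpoint Riemann sum and check that the observed error roughly halves each time the sample count doubles, which is the first-order convergence such a verification approach would expect.

    /* Sketch: verify first-order convergence of a Riemann-sum
     * approximation to a 1D absorption integral along a ray of
     * length D with extinction tau(t) = t. The exact transparency
     * is exp(-D*D/2); the error ratio should approach 2. */
    #include <stdio.h>
    #include <math.h>

    static double riemann_transparency(double D, int n) {
        double dt = D / n, depth = 0.0;
        for (int i = 0; i < n; ++i) {
            double t = i * dt;        /* left-endpoint sample of tau */
            depth += t * dt;
        }
        return exp(-depth);
    }

    int main(void) {
        const double D = 1.0;
        const double exact = exp(-D * D / 2.0);
        double prev_err = 0.0;
        for (int n = 8; n <= 1024; n *= 2) {
            double err = fabs(riemann_transparency(D, n) - exact);
            printf("n = %4d  error = %.3e", n, err);
            if (prev_err > 0.0)
                printf("  ratio = %.2f", prev_err / err);  /* -> 2 */
            printf("\n");
            prev_err = err;
        }
        return 0;
    }

If a renderer's observed convergence curve deviates from the expected order, that mismatch flags a likely implementation error, which is the essence of the verification procedure described above.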
J. Fishbaugh, M. Prastawa, G. Gerig, S. Durrleman. Geodesic Regression of Image and Shape Data for Improved Modeling of 4D Trajectories, In Proceedings of the 2014 IEEE International Symposium on Biomedical Imaging (ISBI), pp. (accepted). 2014.
T. Fogal, F. Proch, A. Schiewe, O. Hasemann, A. Kempf, J. Krüger. Freeprocessing: Transparent in situ visualization via data interception, In Proceedings of the 14th Eurographics Conference on Parallel Graphics and Visualization, EGPGV, Eurographics Association, 2014.
In situ visualization has become a popular method for avoiding the slowest component of many visualization pipelines: reading data from disk. Most previous in situ work has focused on achieving visualization scalability on par with simulation codes, or on the data movement concerns that become prevalent at extreme scales. In this work, we consider in situ analysis with respect to ease of use and programmability. We describe an abstraction that opens up new applications for in situ visualization, and demonstrate that this abstraction and an expanded set of use cases can be realized without a performance cost.
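One way to realize such transparent data interception on Linux, sketched below under the assumption of a simulation that emits output through write(): an LD_PRELOAD library interposes on the call, inspects the outgoing buffer, and forwards it to the real implementation. This illustrates the general mechanism only, not Freeprocessing's actual implementation.

    /* Sketch: LD_PRELOAD interposition on write(), so existing file
     * output can be observed (and handed to an in situ consumer)
     * without modifying or relinking the simulation.
     * Build: gcc -shared -fPIC -o libintercept.so intercept.c -ldl
     * Run:   LD_PRELOAD=./libintercept.so ./simulation              */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <unistd.h>

    typedef ssize_t (*write_fn)(int, const void *, size_t);

    ssize_t write(int fd, const void *buf, size_t count) {
        static write_fn real_write = NULL;
        if (!real_write)
            real_write = (write_fn)dlsym(RTLD_NEXT, "write");

        /* Hook point: inspect or copy the outgoing simulation data
         * here, e.g., forward it to a visualization pipeline. Logging
         * goes through real_write to avoid re-entering this hook. */
        char msg[64];
        int len = snprintf(msg, sizeof msg,
                           "[intercept] fd=%d, %zu bytes\n", fd, count);
        if (len > 0) real_write(2, msg, (size_t)len);

        return real_write(fd, buf, count);  /* let the write proceed */
    }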
Z. Fu, H.K. Dasari, M. Berzins, B. Thompson. Parallel Breadth First Search on GPU Clusters, SCI Technical Report, No. UUSCI-2014-002, SCI Institute, University of Utah, 2014.
Fast, scalable, low-cost, and low-power execution of parallel graph algorithms is important for a wide variety of commercial and public sector applications. Breadth First Search (BFS) imposes an extreme burden on memory bandwidth and network communications and has been proposed as a benchmark that may be used to evaluate current and future parallel computers. Hardware trends and manufacturing limits strongly imply that many-core devices, such as NVIDIA® GPUs and the Intel® Xeon Phi®, will become central components of such future systems. GPUs are well known to deliver the highest FLOPS/watt and enjoy a very significant memory bandwidth advantage over CPU architectures. Recent work has demonstrated that GPUs can deliver high performance for parallel graph algorithms and, further, that it is possible to encapsulate that capability in a manner that hides the low-level details of the GPU architecture and the CUDA language while preserving the high throughput of the GPU. We extend previous research on GPUs and on scalable graph processing on supercomputers and demonstrate that a high-performance parallel graph machine can be created using commodity GPUs and networking hardware.
Keywords: GPU cluster, MPI, BFS, graph, parallel graph algorithm
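For readers unfamiliar with the algorithm, below is a minimal serial sketch of the level-synchronous BFS that GPU implementations such as the one described above parallelize: each iteration expands the current frontier by one hop, and the per-frontier loops are what a GPU executes concurrently (with atomic updates to the visited levels). The CSR graph here is made up for illustration.

    /* Sketch: serial level-synchronous BFS over a CSR graph. On a GPU,
     * frontier vertices are processed concurrently and the level check
     * becomes an atomic compare-and-swap. */
    #include <stdio.h>
    #include <string.h>

    #define NV 6

    int main(void) {
        /* Made-up graph in CSR form: row offsets + column indices. */
        int row[NV + 1] = {0, 2, 4, 6, 8, 9, 10};
        int col[]       = {1, 2, 0, 3, 0, 3, 1, 4, 5, 4};

        int level[NV];
        for (int i = 0; i < NV; ++i) level[i] = -1;

        int frontier[NV], next[NV];
        int nf = 0, source = 0;
        frontier[nf++] = source;
        level[source] = 0;

        /* Level-synchronous expansion: one hop per iteration. */
        for (int depth = 1; nf > 0; ++depth) {
            int nn = 0;
            for (int f = 0; f < nf; ++f) {           /* parallel on GPU */
                int u = frontier[f];
                for (int e = row[u]; e < row[u + 1]; ++e) {
                    int v = col[e];
                    if (level[v] < 0) {              /* atomic on GPU */
                        level[v] = depth;
                        next[nn++] = v;
                    }
                }
            }
            memcpy(frontier, next, nn * sizeof(int));
            nf = nn;
        }

        for (int i = 0; i < NV; ++i)
            printf("vertex %d: level %d\n", i, level[i]);
        return 0;
    }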