2019
F. Wang, I. Wald, Q. Wu, W. Usher, C. R. Johnson.
CPU Isosurface Ray Tracing of Adaptive Mesh Refinement Data, In IEEE Transactions on Visualization and Computer Graphics, Vol. 25, No. 1, IEEE, pp. 1142--1151. Jan, 2019.
DOI: 10.1109/TVCG.2018.2864850
Adaptive mesh refinement (AMR) is a key technology for large-scale simulations that allows for adaptively changing the simulation mesh resolution, resulting in significant computational and storage savings. However, visualizing such AMR data poses a significant challenge due to the difficulties introduced by the hierarchical representation when reconstructing continuous field values. In this paper, we detail a comprehensive solution for interactive isosurface rendering of block-structured AMR (BS-AMR) data. We contribute a novel reconstruction strategy—the octant method—which is continuous, adaptive and simple to implement. Furthermore, we present a generally applicable hybrid implicit isosurface ray-tracing method, which provides better rendering quality and performance than the built-in sampling-based approach in OSPRay. Finally, we integrate our octant method and hybrid isosurface geometry into OSPRay as a module, providing the ability to create high-quality interactive visualizations combining volume and isosurface representations of BS-AMR data. We evaluate the rendering performance, memory consumption and quality of our method on two gigascale block-structured AMR datasets.
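For readers new to implicit isosurface ray tracing, the following minimal sketch (illustrative Python, not the paper's OSPRay implementation; the field callable, step size, and bisection count are assumptions) shows the core operation: sample the reconstructed scalar field along a ray, bracket a sign change of the iso-function, and refine the hit with bisection.

```python
import numpy as np

def intersect_isosurface(field, origin, direction, isovalue,
                         t_min, t_max, step, n_bisect=16):
    # Sample the scalar field along the ray; a sign change of
    # f(t) = field(x(t)) - isovalue brackets a crossing, which is
    # then refined by bisection to sub-step accuracy.
    f = lambda t: field(origin + t * direction) - isovalue
    t_prev, f_prev = t_min, f(t_min)
    for t in np.arange(t_min + step, t_max, step):
        f_curr = f(t)
        if f_prev * f_curr <= 0.0:            # crossing bracketed
            lo, hi, f_lo = t_prev, t, f_prev
            for _ in range(n_bisect):         # bisection refinement
                mid = 0.5 * (lo + hi)
                f_mid = f(mid)
                if f_lo * f_mid <= 0.0:
                    hi = mid
                else:
                    lo, f_lo = mid, f_mid
            return 0.5 * (lo + hi)            # ray parameter of the hit
        t_prev, f_prev = t, f_curr
    return None                               # no crossing in [t_min, t_max]

# Toy usage: a unit sphere as the "reconstructed" field.
hit = intersect_isosurface(lambda p: np.dot(p, p), np.zeros(3),
                           np.array([1.0, 0.0, 0.0]), 1.0, 0.0, 2.0, 0.1)
print(hit)  # ~1.0
```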
F. Wang, I. Wald, C.R. Johnson.
Interactive Rendering of Large-Scale Volumes on Multi-Core CPUs, In 2019 IEEE 9th Symposium on Large Data Analysis and Visualization (LDAV), pp. 27--36. 2019.
DOI: 10.1109/LDAV48142.2019.8944267
Recent advances in large-scale simulations have resulted in volume data of increasing size that stress the capabilities of off-the-shelf visualization tools. Users suffer from long data loading times, because large data must be read from disk into memory prior to rendering the first frame. In this work, we present a volume renderer that enables high-fidelity interactive visualization of large volumes on multi-core CPU architectures. Compared to existing CPU-based visualization frameworks, which take minutes or hours for data loading, our renderer allows users to get a data overview in seconds. Using a hierarchical representation of raw volumes and ray-guided streaming, we reduce the data loading time dramatically and improve the user's interactive experience. We also examine system design choices with respect to performance and scalability. Specifically, we evaluate the hierarchy generation time, which has been ignored in most prior work, but which can become a significant bottleneck as data scales. Finally, we create a module on top of the OSPRay ray tracing framework that is ready to be integrated into general-purpose visualization frameworks such as ParaView.
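A toy sketch of the ray-guided idea (illustrative only; brick size and data are hypothetical, and the real renderer layers this over a multiresolution hierarchy with asynchronous I/O): bricks are read from disk only when a ray first samples inside them, so the first frame needs only the bricks it actually touches.

```python
import numpy as np

BRICK = 8
volume = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in dataset
cache = {}                                               # resident bricks

def load_brick(bi, bj, bk):
    """Simulated disk read: slice one brick out of the raw volume."""
    s = (slice(bi * BRICK, (bi + 1) * BRICK),
         slice(bj * BRICK, (bj + 1) * BRICK),
         slice(bk * BRICK, (bk + 1) * BRICK))
    return volume[s]

def sample(x, y, z):
    key = (x // BRICK, y // BRICK, z // BRICK)
    if key not in cache:                 # ray-guided: fetch on first touch
        cache[key] = load_brick(*key)
    return cache[key][x % BRICK, y % BRICK, z % BRICK]

# March one ray through the volume; only bricks along the ray get loaded.
hits = [sample(t, t, t) for t in range(64)]
print(f"{len(cache)} of {(64 // BRICK)**3} bricks resident after one ray")
```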
A. Warner, J. Tate, B. Burton, C.R. Johnson.
A High-Resolution Head and Brain Computer Model for Forward and Inverse EEG Simulation, In bioRxiv, Cold Spring Harbor Laboratory, Feb, 2019.
DOI: 10.1101/552190
To conduct computational forward and inverse EEG studies of brain electrical activity, researchers must construct realistic head and brain computer models, which is both challenging and time-consuming. The availability of realistic head models and corresponding imaging data is limited in terms of imaging modalities and patient diversity. In this paper, we describe a detailed head modeling pipeline and provide a high-resolution, multimodal, open-source, female head and brain model. The modeling pipeline specifically outlines image acquisition, preprocessing, registration, and segmentation; three-dimensional tetrahedral mesh generation; finite element EEG simulations; and visualization of the model and simulation results. The dataset includes both functional and structural images and EEG recordings from two high-resolution electrode configurations. The intermediate results and software components are also included in the dataset to facilitate modifications to the pipeline. This project will contribute to neuroscience research by providing a high-quality dataset that can be used for a variety of applications and a computational pipeline that may help researchers construct new head models more efficiently.
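For context, finite element EEG simulations in such pipelines solve the standard quasi-static volume-conduction problem (stated here in its usual form; the paper's specific source models and boundary handling may differ):

```latex
\nabla \cdot \left( \sigma \nabla u \right) = \nabla \cdot \mathbf{j}^{p}
  \quad \text{in } \Omega,
\qquad
\sigma \, \frac{\partial u}{\partial n} = 0
  \quad \text{on } \partial \Omega,
```

where \sigma is the (possibly anisotropic) conductivity assigned from the segmentation, \mathbf{j}^{p} is the primary source current in the brain, and u is the electric potential evaluated at the electrode positions. Discretizing on the tetrahedral mesh yields a sparse linear system Ku = b that is solved for each source configuration.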
L. Zhou, D. Weiskopf, C. R. Johnson.
Perceptually guided contrast enhancement based on viewing distance, In Journal of Computer Languages, Vol. 55, Elsevier, pp. 100911. 2019.
ISSN: 2590-1184
DOI: 10.1016/j.cola.2019.100911
We propose an image-space contrast enhancement method for color-encoded visualization. The contrast of an image is enhanced through a perceptually guided approach that interfaces with the user with a single and intuitive parameter of the virtual viewing distance. To this end, we analyze a multiscale contrast model of the input image and test the visibility of bandpass images of all scales at a virtual viewing distance. By adapting weights of bandpass images with a threshold model of spatial vision, this image-based method enhances contrast to compensate for contrast loss caused by viewing the image at a certain distance. Relevant features in the color image can be further emphasized by the user using overcompensation. The weights can be assigned with a simple band-based approach, or with an efficient pixel-based approach that reduces ringing artifacts. The method is efficient and can be integrated into any visualization tool as it is a generic image-based post-processing technique. Using highly diverse datasets, we show the usefulness of perception compensation across a wide range of typical visualizations.
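A band-based sketch of the mechanism (illustrative Python; the band scales and weights are placeholders, and the actual method derives its weights from a threshold model of spatial vision evaluated at the virtual viewing distance, applied to the luminance of the color image):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(lum, weights):
    # Split luminance into bandpass images (differences of Gaussians),
    # scale each band by a visibility-derived weight, and resum.
    sigmas = [1, 2, 4, 8, 16]                     # band scales in pixels
    blurred = [lum] + [gaussian_filter(lum, s) for s in sigmas]
    bands = [blurred[i] - blurred[i + 1] for i in range(len(sigmas))]
    base = blurred[-1]                            # residual lowpass
    out = base + sum(w * b for w, b in zip(weights, bands))
    return np.clip(out, 0.0, 1.0)

# Example: boost bands predicted to fall below the visibility threshold
# at the virtual distance (weights > 1 correspond to overcompensation).
img = np.random.rand(256, 256)
print(enhance(img, weights=[1.8, 1.4, 1.1, 1.0, 1.0]).shape)
```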
L. Zhou, R. Netzel, D. Weiskopf, C. R. Johnson.
Spectral Visualization Sharpening, In ACM Symposium on Applied Perception 2019, No. 18, Association for Computing Machinery, pp. 1--9. 2019.
DOI: 10.1145/3343036.3343133
In this paper, we propose a perceptually-guided visualization sharpening technique. We analyze the spectral behavior of an established comprehensive perceptual model to arrive at our approximated model based on an adapted weighting of the bandpass images from a Gaussian pyramid. The main benefit of this approximated model is its controllability and predictability for sharpening color-mapped visualizations. Our method can be integrated into any visualization tool as it adopts generic image-based post-processing, and it is intuitive and easy to use as viewing distance is the only parameter. Using highly diverse datasets, we show the usefulness of our method across a wide range of typical visualizations.
2018
O. Abdullah, L. Dai, J. Tippetts, B. Zimmerman, A. Van Hoek, S. Joshi, E. Hsu.
High resolution and high field diffusion MRI in the visual system of primates (P3.086), In Neurology, Vol. 90, No. 15 Supplement, Wolters Kluwer Health, Inc., 2018.
ISSN: 0028-3878
Objective: Establishing a primate multiscale genetic brain network linking key microstructural brain components to social behavior remains an elusive goal.
Background: Diffusion MRI, which quantifies the magnitude and anisotropy of water diffusion in brain tissues, offers an unparalleled opportunity to link the macroconnectome (resolution of ~0.5 mm) to the histology-based microconnectome at synaptic resolution.
Design/Methods: We tested the hypothesis that the simplest (and most clinically used) reconstruction technique, diffusion tensor imaging (DTI), yields brain connectivity patterns in the visual system (from optic chiasm to visual cortex) similar to those of more sophisticated and accurate reconstruction methods, including diffusion spectrum imaging (DSI), q-ball imaging (QBI), and generalized q-sampling imaging. We obtained high-resolution diffusion MRI data on an ex vivo brain from Macaca fascicularis: 7T MRI, 0.5 mm isotropic resolution, 515 diffusion volumes up to a b-value (i.e., diffusion sensitivity) of 40,000 s/mm2, with a scan time of ~100 hrs.
Results: Tractography results show that despite the limited ability of DTI to resolve crossing fibers at the optic chiasm, DTI-based tracts mapped to the known projections of layers in the lateral geniculate nucleus and to the primary visual cortex. The other reconstruction methods were superior in localized regions for resolving crossing fibers.
Conclusions: Despite its simplifying assumptions, DTI-based fiber tractography can be used to generate accurate brain connectivity maps that conform to established neuroanatomical features in the visual system.
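For context, the DTI reconstruction compared above reduces to a linear least-squares fit of the tensor model S = S0 exp(-b g^T D g); a minimal sketch (illustrative Python with synthetic data):

```python
import numpy as np

def fit_dti(signals, bvals, bvecs):
    """Linear least-squares DTI fit of ln S = ln S0 - b g^T D g.
    signals: (N,) measurements, bvals: (N,), bvecs: (N, 3) unit vectors.
    Returns the 3x3 diffusion tensor D and S0."""
    g = bvecs
    # Design matrix for the 6 unique tensor elements, plus intercept ln S0.
    B = -bvals[:, None] * np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2]])
    A = np.column_stack([B, np.ones(len(bvals))])
    coef, *_ = np.linalg.lstsq(A, np.log(signals), rcond=None)
    dxx, dyy, dzz, dxy, dxz, dyz = coef[:6]
    D = np.array([[dxx, dxy, dxz], [dxy, dyy, dyz], [dxz, dyz, dzz]])
    return D, np.exp(coef[6])

# Toy check: simulate signals from a known tensor and recover it. The
# principal eigenvector of D drives streamline tractography.
rng = np.random.default_rng(0)
bvecs = rng.normal(size=(64, 3)); bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
bvals = np.full(64, 1000.0)
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
S = np.exp(-bvals * np.einsum('ij,jk,ik->i', bvecs, D_true, bvecs))
D_est, S0 = fit_dti(S, bvals, bvecs)
```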
K. A. Aiello, S. P. Ponnapalli, O. Alter.
Mathematically universal and biologically consistent astrocytoma genotype encodes for transformation and predicts survival phenotype, In APL Bioengineering, Vol. 2, No. 3, AIP Publishing, pp. 031909. September, 2018.
DOI: 10.1063/1.5037882
DNA alterations have been observed in astrocytoma for decades. A copy-number genotype predictive of a survival phenotype was only discovered by using the generalized singular value decomposition (GSVD) formulated as a comparative spectral decomposition. Here, we use the GSVD to compare whole-genome sequencing (WGS) profiles of patient-matched astrocytoma and normal DNA. First, the GSVD uncovers a genome-wide pattern of copy-number alterations, which is bounded by patterns recently uncovered by the GSVDs of microarray-profiled patient-matched glioblastoma (GBM) and, separately, lower-grade astrocytoma and normal genomes. Like the microarray patterns, the WGS pattern is correlated with an approximately one-year median survival time. By filling in gaps in the microarray patterns, the WGS pattern reveals that this biologically consistent genotype encodes for transformation via the Notch together with the Ras and Shh pathways. Second, like the GSVDs of the microarray profiles, the GSVD of the WGS profiles separates the tumor-exclusive pattern from normal copy-number variations and experimental inconsistencies. These include the WGS technology-specific effects of guanine-cytosine content variations across the genomes that are correlated with experimental batches. Third, by identifying the biologically consistent phenotype among the WGS-profiled tumors, the GBM pattern proves to be a technology-independent predictor of survival and response to chemotherapy and radiation, statistically better than the patient's age and tumor's grade (the best other indicators) and than MGMT promoter methylation and IDH1 mutation. We conclude that by using the complex structure of the data, comparative spectral decompositions underlie a mathematically universal description of the genotype-phenotype relations in cancer that other methods miss.
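For context, the comparative spectral decomposition referenced here is the generalized SVD of the two patient-matched profile matrices, which share a right basis (standard Van Loan form; notation illustrative):

```latex
D_{1} = U_{1} \Sigma_{1} X^{T},
\qquad
D_{2} = U_{2} \Sigma_{2} X^{T},
```

where D_1 and D_2 hold the tumor and normal copy-number profiles over the same genomic coordinates, the U_i are orthonormal, the \Sigma_i are diagonal, and X is shared. The ratio \sigma_{1,k} / \sigma_{2,k} of paired generalized singular values ranks each shared pattern by its significance in tumor relative to normal: extreme ratios flag tumor-exclusive genotypes, while ratios near one flag variation common to both, such as the normal copy-number variations and batch effects the GSVD separates out.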
D. N. Anderson, B. Osting, J. Vorwerk, A. D. Dorval, C. R. Butson.
Optimized programming algorithm for cylindrical and directional deep brain stimulation electrodes, In Journal of Neural Engineering, Vol. 15, No. 2, pp. 026005. 2018.
Objective. Deep brain stimulation (DBS) is a growing treatment option for movement and psychiatric disorders. As DBS technology moves toward directional leads with increased numbers of smaller electrode contacts, trial-and-error methods of manual DBS programming are becoming too time-consuming for clinical feasibility. We propose an algorithm to automate DBS programming in near real-time for a wide range of DBS lead designs. Approach. Magnetic resonance imaging and diffusion tensor imaging are used to build finite element models that include anisotropic conductivity. The algorithm maximizes activation of target tissue and utilizes the Hessian matrix of the electric potential to approximate activation of neurons in all directions. We demonstrate our algorithm's ability in an example programming case that targets the subthalamic nucleus (STN) for the treatment of Parkinson's disease for three lead designs: the Medtronic 3389 (four cylindrical contacts), the direct STNAcute (two cylindrical contacts, six directional contacts), and the Medtronic-Sapiens lead (40 directional contacts). Main results. The optimization algorithm returns patient-specific contact configurations in near real-time—less than 10 s for even the most complex leads. When the lead was placed centrally in the target STN, the directional leads were able to activate over 50% of the region, whereas the Medtronic 3389 could activate only 40%. When the lead was placed 2 mm lateral to the target, the directional leads performed as well as they did in the central position, but the Medtronic 3389 activated only 2.9% of the STN. Significance. This DBS programming algorithm can be applied to cylindrical electrodes as well as novel directional leads that are too complex with modern technology to be manually programmed. This algorithm may reduce clinical programming time and encourage the use of directional leads, since they activate a larger volume of the target area than cylindrical electrodes in central and off-target lead placements.
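A heavily simplified sketch of the optimization idea (illustrative Python: point-source contacts in a homogeneous medium stand in for the paper's patient-specific anisotropic FEM, and the lead geometry, conductivity, candidate current sets, and activation threshold are all assumptions). Each contact configuration is scored by a Hessian-based activation proxy over samples of the target, and the best-scoring configuration is returned:

```python
import numpy as np
from itertools import combinations

SIGMA = 0.2                                       # S/m, assumed conductivity
CONTACTS = np.array([[0.0, 0.0, z] for z in (0.0, 1.5, 3.0, 4.5)])  # mm

def potential(points, currents):
    # Superposition of point-source potentials V = I / (4*pi*sigma*r).
    r = np.linalg.norm(points[:, None, :] - CONTACTS[None, :, :], axis=2)
    return (currents / (4 * np.pi * SIGMA * r)).sum(axis=1)

def activation(points, currents, h=0.05):
    """Largest Hessian eigenvalue of V at each point (finite differences):
    a positive value means some fiber orientation sees depolarizing drive."""
    H = np.zeros((len(points), 3, 3))
    for a in range(3):
        for b in range(3):
            ea, eb = np.eye(3)[a] * h, np.eye(3)[b] * h
            H[:, a, b] = (potential(points + ea + eb, currents)
                          - potential(points + ea - eb, currents)
                          - potential(points - ea + eb, currents)
                          + potential(points - ea - eb, currents)) / (4 * h * h)
    return np.linalg.eigvalsh(H)[:, -1]

# Score every single- and dual-cathode configuration on toy target samples.
target = np.random.uniform([-2, -2, 0], [2, 2, 4.5], size=(500, 3))
best = max(
    (np.mean(activation(target, cur) > 0.02), cur.tolist())  # threshold assumed
    for k in (1, 2)
    for idx in combinations(range(4), k)
    for cur in [np.where(np.isin(np.arange(4), idx), -1.0 / k, 0.0)]  # -1 mA split
)
print("activated fraction %.2f with currents (mA): %s" % best)
```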
D. N. Anderson, G. Duffley, J. Vorwerk, A. Dorval, C. R. Butson.
Anodic Stimulation Misunderstood: Preferential Activation of Fiber Orientations with Anodic Waveforms in Deep Brain Stimulation, In Journal of Neural Engineering, IOP Publishing, Oct, 2018.
DOI: 10.1088/1741-2552/aae590
Objective: During deep brain stimulation (DBS), it is well understood that extracellular cathodic stimulation can cause activation of passing axons. Activation can be predicted from the second derivative of the electric potential along an axon, which depends on axonal orientation with respect to the stimulation source. We hypothesize that fiber orientation influences activation thresholds and that fiber orientations can be selectively targeted with DBS waveforms. Approach: We used bioelectric field and multicompartment NEURON models to explore preferential activation based on fiber orientation during monopolar or bipolar stimulation. Preferential fiber orientation was extracted from the principal eigenvectors and eigenvalues of the Hessian matrix of the electric potential. We tested cathodic, anodic, and charge-balanced pulses to target neurons based on fiber orientation in general and clinical scenarios. Main Results: Axons passing the DBS lead have positive second derivatives around a cathode, whereas orthogonal axons have positive second derivatives around an anode, as indicated by the Hessian. Multicompartment NEURON models confirm that passing fibers are activated by cathodic stimulation, and orthogonal fibers are activated by anodic stimulation. Additionally, orthogonal axons have lower thresholds compared to passing axons. In a clinical scenario, fiber pathways associated with therapeutic benefit can be targeted with anodic stimulation at 50% lower stimulation amplitudes. Significance: Fiber orientations can be selectively targeted with simple changes to the stimulus waveform. Anodic stimulation preferentially activates orthogonal fibers, approaching or leaving the electrode, at lower thresholds for similar therapeutic benefit in DBS with decreased power consumption.
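The geometric claim can be illustrated with an idealized point source V(r) = I/(4 pi sigma r) in a homogeneous medium (a simplification of the paper's bioelectric field models; sigma and the geometry are assumptions). The sign of the second directional derivative of V along a fiber predicts depolarization at the fiber site:

```python
import numpy as np

SIGMA = 0.2                                   # S/m, assumed

def V(p, I):
    return I / (4 * np.pi * SIGMA * np.linalg.norm(p))

def d2V(p, direction, I, h=1e-3):
    # Central-difference second derivative of V along a fiber direction.
    d = h * direction / np.linalg.norm(direction)
    return (V(p + d, I) - 2 * V(p, I) + V(p - d, I)) / h**2

site = np.array([1.0, 0.0, 0.0])              # fiber site 1 mm from source
passing = np.array([0.0, 1.0, 0.0])           # axon passing by (tangential)
orthogonal = np.array([1.0, 0.0, 0.0])        # axon approaching (radial)

for I, label in [(-1e-3, "cathodic"), (+1e-3, "anodic")]:
    print(label,
          "passing: %+0.1e" % d2V(site, passing, I),
          "orthogonal: %+0.1e" % d2V(site, orthogonal, I))
# Cathodic: positive along the passing direction; anodic: positive along
# the radial direction, matching the preferential activation described above.
```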
G.A. Ateshian, J.J. Shim, S.A. Maas, J.A. Weiss.
Finite Element Framework for Computational Fluid Dynamics in FEBio, In Journal of Biomechanical Engineering, Vol. 140, No. 2, ASME International, pp. 021001. Jan, 2018.
DOI: 10.1115/1.4038716
The mechanics of biological fluids is an important topic in biomechanics, often requiring the use of computational tools to analyze problems with realistic geometries and material properties. This study describes the formulation and implementation of a finite element framework for computational fluid dynamics (CFD) in FEBio, a free software package designed to meet the computational needs of the biomechanics and biophysics communities. This formulation models nearly incompressible flow with a compressible isothermal formulation that uses a physically realistic value for the fluid bulk modulus. It employs fluid velocity and dilatation as essential variables: The virtual work integral enforces the balance of linear momentum and the kinematic constraint between fluid velocity and dilatation, while fluid density varies with dilatation as prescribed by the axiom of mass balance. Using this approach, equal-order interpolations may be used for both essential variables over each element, contrary to traditional mixed formulations that must explicitly satisfy the inf-sup condition. The formulation accommodates Newtonian and non-Newtonian viscous responses as well as inviscid fluids. The efficiency of numerical solutions is enhanced using Broyden's quasi-Newton method. The results of finite element simulations were verified using well-documented benchmark problems as well as comparisons with other free and commercial codes. These analyses demonstrated that the novel formulation introduced in FEBio could successfully reproduce the results of other codes. The analogy between this CFD formulation and standard finite element formulations for solid mechanics makes it suitable for future extension to fluid–structure interactions (FSIs).
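For context, a minimal statement of this velocity-dilatation formulation (the linear pressure law is shown only as the simplest constitutive example consistent with the abstract; FEBio's exact choices may differ):

```latex
\rho = \frac{\rho_r}{1 + e},
\qquad
\dot{e} = (1 + e) \, \nabla \cdot \mathbf{v},
\qquad
p = p(e) \quad \bigl(\text{e.g., } p = -K e \bigr),
```

where e is the fluid dilatation, \rho_r the referential density, \mathbf{v} the fluid velocity, and K a physically realistic bulk modulus; the virtual work integral enforces momentum balance together with the kinematic constraint on \dot{e}.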
D. Ayyagari, N. Ramesh, D. Yatsenko, T. Tasdizen, C. Atria.
Image reconstruction using priors from deep learning, In Medical Imaging 2018: Image Processing, SPIE, March, 2018.
Tomosynthesis, i.e., reconstruction of 3D volumes from projections acquired over a limited range of perspectives, is a classical ill-posed, underconstrained inverse problem. Data insufficiency leads to reconstruction artifacts that vary in severity depending on the particular problem, the reconstruction method and also on the object being imaged. Machine learning has been used successfully in tomographic problems where data is insufficient, but the challenge with machine learning is that it introduces bias from the learning dataset. A novel framework to improve the quality of the tomosynthesis reconstruction that limits the learning dataset bias by maintaining consistency with the observed data is proposed. Convolutional neural networks (CNNs) are embedded as regularizers in the reconstruction process to introduce the expected features and characteristics of the likely imaged object. The minimization of the objective function keeps the solution consistent with the observations and limits the bias introduced by the machine learning regularizers, improving the quality of the reconstruction. The proposed method has been developed and studied in the specific problem of Cone Beam Tomosynthesis Fluoroscopy (CBT-fluoroscopy), but it is a general framework that can be applied to any image reconstruction problem that is limited by data insufficiency.
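One common way to embed a learned network as a regularizer while keeping the solution tied to the measurements (a sketch of the general pattern, not necessarily the paper's exact objective; operator shapes and parameters are illustrative):

```python
import numpy as np

def reconstruct(A, b, cnn, n_iters=50, lam=0.5):
    """Minimize ||A x - b||^2 while pulling x toward the network's output,
    so data consistency bounds the bias the learned prior can introduce.

    A: (m, n) projection matrix, b: (m,) measurements,
    cnn: callable mapping an image vector to its regularized version."""
    x = A.T @ b                                   # simple backprojection init
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)  # safe gradient step
    for _ in range(n_iters):
        grad_data = A.T @ (A @ x - b)             # data-fidelity gradient
        grad_prior = lam * (x - cnn(x))           # pull toward CNN output
        x = x - step * (grad_data + grad_prior)
    return x

# Toy usage with an identity "network" (a real CNN would be trained to
# produce plausible CBT-fluoroscopy volumes):
A = np.random.rand(80, 100); b = A @ np.random.rand(100)
x = reconstruct(A, b, cnn=lambda v: v)
```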
M. Berzins.
Nonlinear stability and time step selection for the MPM method, In Computational Particle Mechanics, Jan, 2018.
ISSN: 2196-4386
DOI: 10.1007/s40571-018-0182-y
The Material Point Method (MPM) has been developed from the Particle in Cell (PIC) method over the last 25 years and has proved its worth in solving many challenging problems involving large deformations. Nevertheless, there are many open questions regarding the theoretical properties of MPM. For example, while Fourier methods, as applied to PIC, may provide useful insight, the nonlinear nature of MPM makes it necessary to use a full nonlinear stability analysis to determine a stable time step. To begin to address this, the stability analysis of Spigler and Vianello is adapted to MPM and used to derive a stable time step bound for a model problem. This bound is contrasted against traditional speed-of-sound and CFL bounds and shown to be a realistic stability bound for a model problem.
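For context, the traditional restriction that the paper's nonlinear analysis is contrasted against is the familiar CFL-type bound (standard form; the paper's own derived bound is not reproduced here):

```latex
\Delta t \;\le\; C \, \frac{\Delta x}{c + |v|_{\max}},
\qquad
c = \sqrt{E / \rho},
```

where C is a CFL safety factor, \Delta x the grid spacing, c the material sound speed obtained from the stiffness E and density \rho, and |v|_{\max} the largest particle speed.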
H. Bhatia, A.G. Gyulassy, V. Lordi, J.E. Pask, V. Pascucci, P.T. Bremer.
TopoMS: Comprehensive topological exploration for molecular and condensed-matter systems, In Journal of Computational Chemistry, Vol. 39, No. 16, Wiley, pp. 936--952. March, 2018.
DOI: 10.1002/jcc.25181
We introduce TopoMS, a computational tool enabling detailed topological analysis of molecular and condensed-matter systems, including the computation of atomic volumes and charges through the quantum theory of atoms in molecules, as well as the complete molecular graph. With roots in techniques from computational topology, and using a shared-memory parallel approach, TopoMS provides scalable, numerically robust, and topologically consistent analysis. TopoMS can be used as a command-line tool or with a GUI (graphical user interface), where the latter also enables an interactive exploration of the molecular graph. This paper presents algorithmic details of TopoMS and compares it with state-of-the-art tools: Bader charge analysis v1.0 (Arnaldsson et al., 01/11/17) and molecular graph extraction using Critic2 (Otero-de-la-Roza et al., Comput. Phys. Commun. 2014, 185, 1007). TopoMS not only combines the functionality of these individual codes but also demonstrates up to 4× performance gain on a standard laptop, faster convergence to fine-grid solution, robustness against lattice bias, and topological consistency. TopoMS is released publicly under BSD License.
A. Bock, E. Axelsson, C. Emmart, M. Kuznetsova, C. Hansen, A. Ynnerman.
OpenSpace: Changing the Narrative of Public Dissemination in Astronomical Visualization from What to How, In IEEE Computer Graphics and Applications, Vol. 38, No. 3, IEEE, pp. 44--57. May, 2018.
DOI: 10.1109/mcg.2018.032421653
We present the development of an open-source software platform called OpenSpace that bridges the gap between scientific discoveries and public dissemination and thus paves the way for the next generation of science communication and data exploration. We describe how the platform enables interactive presentations of dynamic and time-varying processes by domain experts to the general public. The concepts are demonstrated through four cases: image acquisitions of the New Horizons and Rosetta spacecraft, the dissemination of space weather phenomena, and the display of high-resolution planetary images. Each case has been presented at public events with great success. These cases highlight the details of data acquisition, rather than presenting the final results, showing the audience the value of supporting the efforts behind scientific discovery.
B.M. Burton, K.K. Aras, W.W. Good, J.D. Tate, B. Zenger, R.S. MacLeod.
A Framework for Image-Based Modeling of Acute Myocardial Ischemia Using Intramurally Recorded Extracellular Potentials, In Annals of Biomedical Engineering, Springer Nature, May, 2018.
DOI: 10.1007/s10439-018-2048-0
The biophysical basis for electrocardiographic evaluation of myocardial ischemia stems from the notion that ischemic tissues develop, with relative uniformity, along the endocardial aspects of the heart. These injured regions of subendocardial tissue give rise to intramural currents that lead to ST segment deflections within electrocardiogram (ECG) recordings. The concept of subendocardial ischemic regions is often used in clinical practice, providing a simple and intuitive description of ischemic injury; however, such a model grossly oversimplifies the presentation of ischemic disease—inadvertently leading to errors in ECG-based diagnoses. Furthermore, recent experimental studies have brought into question the subendocardial ischemia paradigm suggesting instead a more distributed pattern of tissue injury. These findings come from experiments and so have both the impact and the limitations of measurements from living organisms. Computer models have often been employed to overcome the constraints of experimental approaches and have a robust history in cardiac simulation. To this end, we have developed a computational simulation framework aimed at elucidating the effects of ischemia on measurable cardiac potentials. To validate our framework, we simulated, visualized, and analyzed 226 experimentally derived acute myocardial ischemic events. Simulation outcomes agreed both qualitatively (feature comparison) and quantitatively (correlation, average error, and significance) with experimentally obtained epicardial measurements, particularly under conditions of elevated ischemic stress. Our simulation framework introduces a novel approach to incorporating subject-specific, geometric models and experimental results that are highly resolved in space and time into computational models. We propose this framework as a means to advance the understanding of the underlying mechanisms of ischemic disease while simultaneously putting in place the computational infrastructure necessary to study and improve ischemia models aimed at reducing diagnostic errors in the clinic.
B.M. Burton, K.K. Aras, W.W. Good, J.D. Tate, B. Zenger, R.S. MacLeod.
Image-Based Modeling of Acute Myocardial Ischemia Using Experimentally Derived Ischemic Zone Source Representations, In Journal of Electrocardiology, Vol. 51, No. 4, Elsevier BV, pp. 725--733. July, 2018.
DOI: 10.1016/j.jelectrocard.2018.05.005
Background: Computational models of myocardial ischemia often use oversimplified ischemic source representations to simulate epicardial potentials. The purpose of this study was to explore the influence of biophysically justified, subject-specific ischemic zone representations on epicardial potentials.
Methods: We developed and implemented an image-based simulation pipeline, using intramural recordings from a canine experimental model to define subject-specific ischemic regions within the heart. Static epicardial potential distributions, reflective of ST segment deviations, were simulated and validated against measured epicardial recordings.
Results: Simulated epicardial potential distributions showed strong statistical correlation and visual agreement with measured epicardial potentials. Additionally, we identified and described in what way border zone parameters influence epicardial potential distributions during the ST segment.
Conclusion: From image-based simulations of myocardial ischemia, we generated subject-specific ischemic sources that accurately replicated epicardial potential distributions. Such models are essential in understanding the underlying mechanisms of the bioelectric fields that arise during ischemia and are the basis for more sophisticated simulations of body surface ECGs.
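For context, image-based ischemia simulations of this kind typically rest on the passive bidomain source formulation for the ST segment (stated in its standard form; the pipeline's subject-specific conductivities and boundary conditions are not reproduced here):

```latex
\nabla \cdot \left[ \left( \boldsymbol{\sigma}_i + \boldsymbol{\sigma}_e \right) \nabla \phi_e \right]
  = - \nabla \cdot \left( \boldsymbol{\sigma}_i \nabla \phi_m \right),
```

where \boldsymbol{\sigma}_i and \boldsymbol{\sigma}_e are the intracellular and extracellular conductivity tensors. During the ST segment the transmembrane potential \phi_m is approximately piecewise constant, depressed within the (here experimentally derived) ischemic zone with a border-zone transition; prescribing \phi_m and solving for \phi_e yields the simulated epicardial potential distributions compared against measurements.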
M.J.M. Cluitmans, S. Ghimire, J. Dhamala, J. Coll-Font, J.D. Tate, S. Giffard-Roisin, J. Svehlikova, O. Doessel, M.S. Guillem, D.H. Brooks, R.S. Macleod, L. Wang.
P1125 Noninvasive localization of premature ventricular complexes: a research-community-based approach, In EP Europace, Vol. 20, No. Supplement, Oxford University Press, March, 2018.
DOI: 10.1093/europace/euy015.611
Background: Noninvasive localization of premature ventricular complexes (PVCs) to guide ablation therapy is one of the emerging applications of electrocardiographic imaging (ECGI). Because of its increasing clinical use, it is essential to compare the many implementations of ECGI that exist to understand the specific characteristics of each approach.
Objective: Our consortium is a community of researchers aiming to collaborate in the field of ECGI, and to objectively compare and improve methods. Here, we will compare methods to localize the origin of PVCs with ECGI.
Methods: Our consortium hosts a repository of ECGI data on its website. For the current study, participants analysed simulated electrocardiograms from premature beats, freely available on that website. These PVCs were simulated to originate from eight ventricular locations and the resulting body-surface potentials were computed. These body-surface electrocardiograms (and the torso-heart geometry) were then provided to the study participants to apply their ECGI algorithms to determine the origin of the PVCs. Participants could choose freely among four different source models, i.e., representations of the bioelectric fields reconstructed from ECGI: 1) epicardial potentials (POTepi), 2) epicardial & endocardial potentials (POTepi&endo), 3) transmembrane potentials on the endocardium and epicardium (TMPepi&endo) and 4) transmembrane potentials throughout the myocardium (TMPmyo). Participants were free to employ any software implementation of ECGI and were blinded to the ground truth data.
Results: Four research groups submitted 11 entries for this study. The figure shows the localization error between the known and reconstructed origin of each PVC for each submission, categorized per source model. Each colour represents one research group and some groups submitted results using different approaches. These results demonstrate that the variation of accuracy was larger among research groups than among the source models. Most submissions achieved an error below 2 cm, but none performed with a consistent sub-centimetre accuracy.
Conclusion: This study demonstrates a successful community-based approach to study different ECGI methods for PVC localization. The goal was not to rank research groups but to compare both source models and numerical implementations. PVC localization with these methods was not as dependent on the source representation as it was on the implementation of ECGI. Consequently, ECGI validation should not be performed on generic methods, but should be specifically performed for each lab's implementation. The novelty of this study is that it achieves this in the first open, international comparison of approaches using a common set of gold standards. Continued collaborative validation is essential to understand the effect of implementation differences, in order to reach significant improvements and arrive at clinically-relevant sub-centimetre accuracy of PVC localization.
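As a concrete point of reference for the implementation differences discussed above, the classic baseline among ECGI implementations is zero-order Tikhonov regularization of the ill-posed inverse (a sketch; matrix sizes and the regularization weight are illustrative, and real implementations differ in exactly the choices this study compares):

```python
import numpy as np

def ecgi_tikhonov(A, torso_potentials, lam):
    """Zero-order Tikhonov solution of the ECGI inverse problem:
    minimize ||A x - y||^2 + lam^2 ||x||^2 for heart-surface potentials x.
    A: (n_torso, n_heart) forward transfer matrix from the torso-heart
    geometry; torso_potentials: (n_torso,) body-surface potentials at
    one time instant."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n),
                           A.T @ torso_potentials)

# Toy usage; the PVC origin can then be estimated, e.g., as the heart
# node with the earliest activation across the reconstructed time series
# (one simple criterion; submissions differed precisely in such choices).
A = np.random.rand(200, 100)                  # illustrative transfer matrix
y = A @ np.random.rand(100)
x_hat = ecgi_tikhonov(A, y, lam=0.05)
```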
M. Cluitmans, D. H. Brooks, R. MacLeod, O. Dössel, M. S. Guillem, P. M. van Dam, J. Svehlikova, B. He, J. Sapp, L. Wang, L. Bear.
Validation and Opportunities of Electrocardiographic Imaging: From Technical Achievements to Clinical Applications, In Frontiers in Physiology, Vol. 9, Frontiers Media SA, pp. 1305. 2018.
ISSN: 1664-042X
DOI: 10.3389/fphys.2018.01305
Electrocardiographic imaging (ECGI) reconstructs the electrical activity of the heart from a dense array of body-surface electrocardiograms and a patient-specific heart-torso geometry. Depending on how it is formulated, ECGI allows the reconstruction of the activation and recovery sequence of the heart, the origin of premature beats or tachycardia, the anchors/hotspots of re-entrant arrhythmias and other electrophysiological quantities of interest. Importantly, these quantities are directly and noninvasively reconstructed in a digitized model of the patient’s three-dimensional heart, which has led to clinical interest in ECGI’s ability to personalize diagnosis and guide therapy.
Despite considerable development over the last decades, validation of ECGI is challenging. Firstly, results depend considerably on implementation choices, which are necessary to deal with ECGI’s ill-posed character. Secondly, it is challenging to obtain (invasive) ground truth data of high quality. In this review, we discuss the current status of ECGI validation as well as the major challenges remaining for complete adoption of ECGI in clinical practice.
Specifically, showing clinical benefit is essential for the adoption of ECGI. Such benefit may lie in patient outcome improvement, workflow improvement, or cost reduction. Future studies should focus on these aspects to achieve broad adoption of ECGI, but only after the technical challenges have been solved for that specific application/pathology. We propose ‘best’ practices for technical validation and highlight collaborative efforts recently organized in this field. Continued interaction between engineers, basic scientists and physicians remains essential to find a hybrid between technical achievements, pathological mechanisms insights, and clinical benefit, to evolve this powerful technique towards a useful role in clinical practice.
A. Erdemir, P.J. Hunter, G.A. Holzapfel, L.M. Loew, J. Middleton, C.R. Jacobs, P. Nithiarasu, R. Löhner, G. Wei, B.A. Winkelstein, V.H. Barocas, F. Guilak, J.P. Ku, J.L. Hicks, S.L. Delp, M.S. Sacks, J.A. Weiss, G.A. Ateshian, S.A. Maas, A.D. McCulloch, G.C.Y. Peng.
Perspectives on Sharing Models and Related Resources in Computational Biomechanics Research, In Journal of Biomechanical Engineering, Vol. 140, No. 2, ASME International, pp. 024701. Jan, 2018.
DOI: 10.1115/1.4038768
The role of computational modeling for biomechanics research and related clinical care will be increasingly prominent. The biomechanics community has been developing computational models routinely for exploration of the mechanics and mechanobiology of diverse biological structures. As a result, a large array of models, data, and discipline-specific simulation software has emerged to support endeavors in computational biomechanics. Sharing computational models and related data and simulation software was at first a utilitarian interest; now it is a necessity. Exchange of models, in support of knowledge exchange provided by scholarly publishing, has important implications. Specifically, model sharing can facilitate assessment of reproducibility in computational biomechanics and can provide an opportunity for repurposing and reuse, and a venue for medical training. The community's desire to investigate biological and biomechanical phenomena crossing multiple systems, scales, and physical domains, also motivates sharing of modeling resources as blending of models developed by domain experts will be a required step for comprehensive simulation studies as well as the enhancement of their rigor and reproducibility. The goal of this paper is to understand current perspectives in the biomechanics community for the sharing of computational models and related resources. Opinions on opportunities, challenges, and pathways to model sharing, particularly as part of the scholarly publishing workflow, were sought. A group of journal editors and a handful of investigators active in computational biomechanics were approached to collect short opinion pieces as a part of a larger effort of the IEEE EMBS Computational Biology and the Physiome Technical Committee to address model reproducibility through publications. A synthesis of these opinion pieces indicates that the community recognizes the necessity and usefulness of model sharing. There is a strong will to facilitate model sharing, and there are corresponding initiatives by the scientific journals. Outside the publishing enterprise, infrastructure to facilitate model sharing in biomechanics exists, and simulation software developers are interested in accommodating the community's needs for sharing of modeling resources. Encouragement for the use of standardized markups, concerns related to quality assurance, acknowledgement of increased burden, and importance of stewardship of resources are noted. In the short-term, it is advisable that the community builds upon recent strategies and experiments with new pathways for continued demonstration of model sharing, its promotion, and its utility. Nonetheless, the need for a long-term strategy to unify approaches in sharing computational models and related resources is acknowledged. Development of a sustainable platform supported by a culture of open model sharing will likely evolve through continued and inclusive discussions bringing all stakeholders to the table, e.g., by possibly establishing a consortium.
M.D. Foote, B. Zimmerman, A. Sawant, S. Joshi.
Real-Time Patient-Specific Lung Radiotherapy Targeting using Deep Learning, In 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands, 2018.
Radiation therapy presents a need for dynamic tracking of a target tumor volume. Fiducial markers such as implanted gold seeds have been used to gate radiation delivery, but the markers are invasive and gating significantly increases treatment time. Pretreatment acquisition of a 4DCT allows for the development of accurate motion estimation for treatment planning. A deep convolutional neural network and subspace motion tracking are used to recover anatomical positions from a single radiograph projection in real-time. We approximate the nonlinear inverse of a diffeomorphic transformation composed with radiographic projection as a deep network that produces subspace coordinates to define the patient-specific deformation of the lungs from a baseline anatomic position. The geometric accuracy of the subspace projections on real patient data is similar to accuracy attained by original image registration between individual respiratory-phase image volumes.
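A sketch of the subspace half of this approach (illustrative Python with toy dimensions; in the paper the coordinates come from a trained CNN applied to the live radiograph, and the displacement fields from diffeomorphic registration of the 4DCT phases):

```python
import numpy as np

def build_subspace(displacement_fields, k=3):
    """displacement_fields: (n_phases, n_voxels*3) motion estimated from
    the pretreatment 4DCT. PCA via SVD yields a low-dimensional basis."""
    mean = displacement_fields.mean(axis=0)
    _, _, Vt = np.linalg.svd(displacement_fields - mean, full_matrices=False)
    return mean, Vt[:k]                     # mean field + k basis fields

def deform(mean, basis, coords):
    """Reassemble a dense deformation from predicted subspace coordinates
    (`coords` would come from the CNN applied to the live radiograph)."""
    return mean + coords @ basis

# Toy stand-in for registered 4DCT motion over 10 respiratory phases.
phases = np.random.randn(10, 3000)
mean, basis = build_subspace(phases)
field = deform(mean, basis, np.array([0.5, -0.2, 0.1]))
```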