SCI Publications
2018
Image segmentation using disjunctive normal Bayesian shape and appearance models.
F. Mesadi, E. Erdil, M. Cetin, T. Tasdizen, In IEEE Transactions on Medical Imaging, Vol. 37, No. 1, IEEE, pp. 293--305. Jan, 2018.
DOI: 10.1109/tmi.2017.2756929
The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. For instance, most active shape and appearance models require landmark points and assume unimodal shape and appearance distributions, and the level set representation does not support construction of local priors. In this paper, we present novel appearance and shape models for image segmentation based on a differentiable implicit parametric shape representation called a disjunctive normal shape model (DNSM). The DNSM is formed by the disjunction of polytopes, which themselves are formed by the conjunctions of half-spaces. The DNSM's parametric nature allows the use of powerful local prior statistics, and its implicit nature removes the need to use landmarks and easily handles topological changes. In a Bayesian inference framework, we model arbitrary shape and appearance distributions using nonparametric density estimations, at any local scale. The proposed local shape prior results in accurate segmentation even when very few training shapes are available, because the method generates a rich set of shape variations by locally combining training samples. We demonstrate the performance of the framework by applying it to both 2-D and 3-D data sets with emphasis on biomedical image segmentation applications.
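As an illustration of the construction described above (a sketch, not code from the paper), the following numpy function evaluates a disjunctive-normal shape function as a disjunction of polytopes, each the conjunction of sigmoid-relaxed half-spaces; the particular weights, biases, and sigmoid relaxation are assumptions made for the example.

```python
import numpy as np

def dnsm(points, weights, biases):
    """Evaluate a disjunctive-normal shape function at a set of points.

    points  : (P, d) coordinates.
    weights : (N, M, d); weights[i, j] is the normal of the j-th half-space
              of the i-th polytope.
    biases  : (N, M) half-space offsets.
    Returns P values in (0, 1); values near 1 lie inside the shape.
    """
    # Sigmoid relaxation of each half-space indicator: sigma(w . x + b).
    h = 1.0 / (1.0 + np.exp(-(np.einsum("nmd,pd->pnm", weights, points) + biases)))
    # Conjunction over half-spaces (product), then disjunction over polytopes
    # via De Morgan: 1 - prod_i (1 - prod_j h_ij).
    polytopes = np.prod(h, axis=2)                 # (P, N)
    return 1.0 - np.prod(1.0 - polytopes, axis=1)  # (P,)

# Example: two polytopes in 2-D, each the intersection of three half-spaces.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3, 2))
b = rng.normal(size=(2, 3))
print(dnsm(rng.uniform(-1, 1, size=(5, 2)), W, b))
```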
Q.C. Nguyen, M. Sajjadi, M. McCullough, M. Pham, T.T. Nguyen, W. Yu, H. Meng, M. Wen, F. Li, K.R. Smith, K. Brunisholz, T. Tasdizen.
Neighbourhood looking glass: 360º automated characterisation of the built environment for neighbourhood effects research, In Journal of Epidemiology and Community Health, BMJ, Jan, 2018.
DOI: 10.1136/jech-2017-209456
Background
Neighbourhood quality has been connected with an array of health issues, but neighbourhood research has been limited by the lack of methods to characterise large geographical areas. This study uses innovative computer vision methods and a new big data source of street view images to automatically characterise neighbourhood built environments.
Methods
A total of 430 000 images were obtained using Google's Street View Image API for Salt Lake City, Chicago and Charleston. Convolutional neural networks were used to create indicators of street greenness, crosswalks and building type. We implemented log Poisson regression models to estimate associations between built environment features and individual prevalence of obesity and diabetes in Salt Lake City, controlling for individual-level and zip code-level predisposing characteristics.
Results
Computer vision models had an accuracy of 86%–93% compared with manual annotations. Charleston had the highest percentage of green streets (79%), while Chicago had the highest percentage of crosswalks (23%) and commercial buildings/apartments (59%). Built environment characteristics were categorised into tertiles, with the highest tertile serving as the referent group. Individuals living in zip codes with the most green streets, crosswalks and commercial buildings/apartments had relative obesity prevalences that were 25%–28% lower and relative diabetes prevalences that were 12%–18% lower than individuals living in zip codes with the least abundance of these neighbourhood features.
Conclusion
Neighbourhood conditions may influence chronic disease outcomes. Google Street View images represent an underused data resource for the construction of built environment features.
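For readers unfamiliar with the log Poisson regression mentioned in the Methods, the sketch below shows one common way to estimate prevalence ratios with a Poisson GLM and log link; the dataframe, variable names, and covariates are hypothetical and not from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical individual-level data: a binary obesity indicator plus the
# tertile of a built-environment feature and one covariate (names invented).
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "obese": rng.integers(0, 2, n),
    "green_tertile": rng.integers(1, 4, n),
    "age": rng.uniform(20, 80, n),
})

# Poisson regression with a log link (and robust standard errors) is a common
# way to estimate prevalence ratios for a binary outcome in large samples.
model = smf.glm("obese ~ C(green_tertile) + age", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(model.params))   # exponentiated coefficients ~ prevalence ratios
```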
C. Nobre, M. Streit, A. Lex.
Juniper: A Tree+Table Approach to Multivariate Graph Visualization, In CoRR, 2018.
Analyzing large, multivariate graphs is an important problem in many domains, yet such graphs are challenging to visualize. In this paper, we introduce a novel, scalable, tree+table multivariate graph visualization technique, which makes many tasks related to multivariate graph analysis easier to achieve. The core principle we follow is to selectively query for nodes or subgraphs of interest and visualize these subgraphs as a spanning tree of the graph. The tree is laid out in a linear layout, which enables us to juxtapose the nodes with a table visualization where diverse attributes can be shown. We also use this table as an adjacency matrix, so that the resulting technique is a hybrid node-link/adjacency matrix technique. We implement this concept in Juniper, and complement it with a set of interaction techniques that enable analysts to dynamically grow, re-structure, and aggregate the tree, as well as change the layout or show paths between nodes. We demonstrate the utility of our tool in usage scenarios for different multivariate networks: a bipartite network of scholars, papers, and citation metrics, and a multitype network of story characters, places, books, etc.
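A minimal sketch of the tree+table idea (not Juniper's implementation): extract a spanning tree of a queried subgraph, lay its nodes out linearly, and pair each node with a row of attributes. The networkx library and the toy graph below are assumptions made for illustration.

```python
import networkx as nx

# Toy multivariate graph: nodes carry attributes that would populate the table.
G = nx.karate_club_graph()
nx.set_node_attributes(G, {n: {"degree": d} for n, d in G.degree()})

# Extract a BFS spanning tree rooted at a node of interest, then produce a
# linear (depth-first) ordering of its nodes; each entry in the ordering could
# be juxtaposed with a table row of attributes and a matrix row of adjacencies.
root = 0
tree = nx.bfs_tree(G, root)
order = list(nx.dfs_preorder_nodes(tree, root))
rows = [(n, G.nodes[n]["degree"]) for n in order]
for node, deg in rows[:5]:
    print(node, deg)
```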
T.A.J. Ouermi, R. M. Kirby, M. Berzins.
Performance Optimization Strategies for WRF Physics Schemes Used in Weather Modeling, In International Journal of Networking and Computing, Vol. 8, No. 2, IJNC , pp. 301--327. 2018.
DOI: 10.15803/ijnc.8.2_301
Performance optimization in the petascale era and beyond in the exascale era has required and will continue to require modifications of legacy codes to take advantage of new architectures with large core counts and SIMD units. The Numerical Weather Prediction (NWP) physics codes considered here are optimized using thread-local structures of arrays (SOA). High-level and low-level optimization strategies are applied to the WRF Single-Moment 6-Class Microphysics Scheme (WSM6) and Global Forecast System (GFS) physics codes used in the NEPTUNE forecast code. By building on previous work optimizing WSM6 on the Intel Knights Landing (KNL), it is shown how to further optimize WSM6, GFS physics, and GFS radiation on Intel KNL, Haswell, and potentially on future micro-architectures with many cores and SIMD vector units. The optimization techniques used herein employ thread-local structures of arrays (SOA), an OpenMP directive, OMP SIMD, and minor code transformations to enable better utilization of SIMD units, increase parallelism, improve locality, and reduce memory traffic. The optimized versions of WSM6, GFS physics, and GFS radiation run 70×, 27×, and 23× faster (respectively) on KNL and 26×, 18×, and 30× faster (respectively) on Haswell than their respective original serial versions. Although this work targets WRF physics schemes, the findings are transferable to other performance optimization contexts and provide insight into the optimization of codes with complex physical models for present and near-future architectures with many cores and vector units.
B. Peterson, A. Humphrey, J. Holmen, T. Harman, M. Berzins, D. Sunderland, H.C. Edwards.
Demonstrating GPU Code Portability and Scalability for Radiative Heat Transfer Computations, In Journal of Computational Science, Elsevier BV, June, 2018.
ISSN: 1877-7503
DOI: 10.1016/j.jocs.2018.06.005
High performance computing frameworks utilizing CPUs, Nvidia GPUs, and/or Intel Xeon Phis necessitate portable and scalable solutions for application developers. Nvidia GPUs in particular present numerous portability challenges with a different programming model, additional memory hierarchies, and partitioned execution units among streaming multiprocessors. This work presents modifications to the Uintah asynchronous many-task runtime and the Kokkos portability library to enable one single codebase for complex multiphysics applications to run across different architectures. Scalability and performance results are shown on multiple architectures for a globally coupled radiation heat transfer simulation, ranging from a single node to 16384 Titan compute nodes.
B. Peterson, A. Humphrey, D. Sunderland, J. Sutherland, T. Saad, H. Dasari, M. Berzins.
Automatic Halo Management for the Uintah GPU-Heterogeneous Asynchronous Many-Task Runtime, In International Journal of Parallel Programming, Dec, 2018.
ISSN: 1573-7640
DOI: 10.1007/s10766-018-0619-1
The Uintah computational framework is used for the parallel solution of partial differential equations on adaptive mesh refinement grids using modern supercomputers. Uintah is structured with an application layer and a separate runtime system. Uintah is based on a distributed directed acyclic graph (DAG) of computational tasks, with a task scheduler that efficiently schedules and executes these tasks on both CPU cores and on-node accelerators. The runtime system identifies task dependencies, creates a task graph prior to the execution of these tasks, automatically generates MPI message tags, and automatically performs halo transfers for simulation variables. Automating halo transfers in a heterogeneous environment poses significant challenges when tasks compute within a few milliseconds, as runtime overhead affects wall time execution, or when simulation variables require large halos spanning most or all of the computational domain, as task dependencies become expensive to process. These challenges are magnified at production scale when application developers require each compute node to perform thousands of different halo transfers among thousands of simulation variables. The principal contribution of this work is to (1) identify and address inefficiencies that arise when mapping tasks onto the GPU in the presence of automated halo transfers, (2) implement new schemes to reduce runtime system overhead, (3) minimize application developer involvement with the runtime, and (4) show overhead reduction results from these improvements.
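As background for the halo transfers discussed above, the following toy numpy sketch shows what a ghost-cell (halo) exchange between two neighbouring patches looks like; it is a conceptual illustration only and does not reflect Uintah's runtime.

```python
import numpy as np

# Two neighbouring 1-D patches, each with one layer of ghost cells per side.
halo = 1
left = np.arange(10, dtype=float)
right = np.arange(10, 20, dtype=float)
left_g = np.pad(left, halo)
right_g = np.pad(right, halo)

# The halo "transfer": each patch copies the neighbour's boundary values into
# its own ghost cells so that a stencil can read across the patch boundary.
left_g[-halo:] = right[:halo]
right_g[:halo] = left[-halo:]

# A simple averaging stencil on the left patch now sees the right patch's data.
stencil = 0.5 * (left_g[:-2] + left_g[2:])
print(stencil[-1])   # 0.5 * (8.0 + 10.0) = 9.0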
S. Petruzza, A. Gyulassy, V. Pascucci, P. T. Bremer.
A Task-Based Abstraction Layer for User Productivity and Performance Portability in Post-Moore’s Era Supercomputing, In 3RD INTERNATIONAL WORKSHOP ON POST-MOORE’S ERA SUPERCOMPUTING (PMES), 2018.
The proliferation of heterogeneous computing architectures in current and future supercomputing systems dramatically increases the complexity of software development and exacerbates the divergence of software stacks. Currently, task-based runtimes attempt to alleviate these impediments; however, their effective use requires expertise and deep integration that does not facilitate reuse and portability. We propose to introduce a task-based abstraction layer that separates the definition of the algorithm from the runtime-specific implementation, while maintaining performance portability.
A. Prakosa, H. J. Arevalo, D. Deng, P. M. Boyle, P. P. Nikolov, H. Ashikaga, J. J. E. Blauer, E. Ghafoori, C. J. Park, R. C. Blake, F. T. Han, R. S. MacLeod, H. R. Halperin, D. J. Callans, R. Ranjan, J. Chrispin, S. Nazarian, N. A. Trayanova.
Personalized virtual-heart technology for guiding the ablation of infarct-related ventricular tachycardia, In Nature Biomedical Engineering, Springer Nature America, Inc, September, 2018.
DOI: 10.1038/s41551-018-0282-2
Ventricular tachycardia (VT), which can lead to sudden cardiac death, occurs frequently in patients with myocardial infarction. Catheter-based radio-frequency ablation of cardiac tissue has achieved only modest efficacy, owing to the inaccurate identification of ablation targets by current electrical mapping techniques, which can lead to extensive lesions and to a prolonged, poorly tolerated procedure. Here, we show that personalized virtual-heart technology based on cardiac imaging and computational modelling can identify optimal infarct-related VT ablation targets in retrospective animal (five swine) and human studies (21 patients), as well as in a prospective feasibility study (five patients). We first assessed, using retrospective studies (one of which included a proportion of clinical images with artefacts), the capability of the technology to determine the minimum-size ablation targets for eradicating all VTs. In the prospective study, VT sites predicted by the technology were targeted directly, without relying on prior electrical mapping. The approach could improve infarct-related VT ablation guidance, where accurate identification of patient-specific optimal targets could be achieved on a personalized virtual heart before the clinical procedure.
N. Ramesh, T. Tasdizen.
Semi-supervised learning for cell tracking in microscopy images, In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, April, 2018.
This paper discusses an algorithm for semi-supervised learning to predict cell division and motion in microscopy images. The cells for tracking are detected using extremal region selection and are depicted using a graphical representation. The supervised loss minimizes the error in predictions for the division and move classifiers. The unsupervised loss constrains the incoming links for every detection such that only one of the links is active. Similarly, for the outgoing links, we enforce at most two links to be active. The supervised and unsupervised losses are embedded in a Bayesian framework for probabilistic learning. The classifier predictions are used to model flow variables for every edge in the graph. The cell lineage problem is solved by formulating it as an energy minimization with constraints using integer linear programming. The unsupervised loss adds a significant improvement in the prediction of the division classifier.
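The link constraints described above (at most one active incoming link per detection, at most two outgoing links to allow division) can be written as a small integer linear program. The sketch below uses scipy's MILP solver on a hypothetical two-frame example; the detections, scores, and edges are invented for illustration and do not come from the paper.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy lineage problem: detections 0, 1 in frame t and 2, 3 in frame t+1, with
# four candidate links; scores stand in for the move/division classifier outputs.
edges = [(0, 2), (0, 3), (1, 2), (1, 3)]      # (source detection, target detection)
scores = np.array([0.9, 0.1, 0.2, 0.8])

# Each target detection may have at most one active incoming link; each source
# detection may have at most two active outgoing links (allowing division).
A_in = np.array([[1 if e[1] == t else 0 for e in edges] for t in (2, 3)])
A_out = np.array([[1 if e[0] == s else 0 for e in edges] for s in (0, 1)])
A = np.vstack([A_in, A_out])
upper = np.array([1, 1, 2, 2])

# Maximize the total score of active links (minimize the negated scores).
res = milp(c=-scores,
           constraints=LinearConstraint(A, ub=upper),
           integrality=np.ones(len(edges)),
           bounds=Bounds(0, 1))
print(res.x)   # expected: links (0, 2) and (1, 3) are active
```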
M. Razi, A. Narayan, R.M. Kirby, D. Bedrov.
Fast predictive models based on multi-fidelity sampling of properties in molecular dynamics simulations, In Computational Materials Science, Vol. 152, Elsevier BV, pp. 125--133. September, 2018.
DOI: 10.1016/j.commatsci.2018.05.029
In this paper we introduce a novel approach for enhancing the sampling convergence for properties predicted by molecular dynamics. The proposed approach is based upon the construction of a multi-fidelity surrogate model using computational models with different levels of accuracy. While low fidelity models produce results with a lower level of accuracy and computational cost, in this framework they can provide the basis for identification of the optimal sparse sampling pattern for high fidelity models to construct an accurate surrogate model. Such an approach can provide a significant computational saving for the estimation of the quantities of interest for the underlying physical/engineering systems. In the present work, this methodology is demonstrated for molecular dynamics simulations of a Lennard-Jones fluid. Levels of multi-fidelity are defined based upon the integration time step employed in the simulation. The proposed approach is applied to two different canonical problems including (i) a single component fluid and (ii) a binary glass-forming mixture. The results show about 70% computational saving for the estimation of averaged properties of the systems such as total energy, self-diffusion coefficient, radial distribution function and mean squared displacements with reasonable accuracy.
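A generic illustration of the multi-fidelity idea (not the specific bi-fidelity algorithm used in the paper): sample a cheap low-fidelity model densely, a costly high-fidelity model sparsely, and fit a correction from the sparse high-fidelity data. The functions and sample counts below are hypothetical.

```python
import numpy as np

# Hypothetical 1-D quantity of interest: "high fidelity" is expensive,
# "low fidelity" is cheap but biased (both functions are illustrative only).
def high_fidelity(x):
    return np.sin(8 * x) + x

def low_fidelity(x):
    return 0.9 * np.sin(8 * x) + 1.1 * x + 0.05

x_lf = np.linspace(0, 1, 200)            # many cheap samples
x_hf = np.linspace(0, 1, 8)              # few expensive samples

# Additive-correction surrogate: fit a low-order model of (HF - LF) from the
# sparse HF samples and apply it to the dense LF sweep.
delta = np.polyfit(x_hf, high_fidelity(x_hf) - low_fidelity(x_hf), deg=2)
surrogate = low_fidelity(x_lf) + np.polyval(delta, x_lf)
print(np.max(np.abs(surrogate - high_fidelity(x_lf))))   # surrogate error
```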
M. Reblin, D. Ketcher, P. Forsyth, E. Mendivil, L. Kane, J. Pok, M. Meyer, Y. Wu, J. Agutter.
Outcomes of an electronic social network intervention with neuro-oncology patient family caregivers, In Journal of Neuro-Oncology, Springer Nature, pp. 1--7. May, 2018.
DOI: 10.1007/s11060-018-2909-2
Introduction
Informal family caregivers (FCG) are an integral and crucial human component in the cancer care continuum. However, research and interventions to help alleviate documented anxiety and burden on this group are lacking. To address the absence of effective interventions, we developed the electronic Support Network Assessment Program (eSNAP), which aims to automate the capture and visualization of social support, an important target for overall FCG support. This study seeks to describe the preliminary efficacy and outcomes of the eSNAP intervention.
Methods
Forty FCGs were enrolled into a longitudinal, two-group randomized design to compare the eSNAP intervention in caregivers of patients with primary brain tumors against controls who did not receive the intervention. Participants were followed for six weeks with questionnaires to assess demographics, caregiver burden, anxiety, depression, and social support. Questionnaires were given at baseline (T1) and then 3 weeks (T2) and 6 weeks (T3) after the baseline questionnaire.
Results
FCGs reported high caregiver burden and distress at baseline, with burden remaining stable over the course of the study. The intervention group was significantly less depressed, but anxiety remained stable across groups.
Conclusions
With the lessons learned and feedback obtained from FCGs, this study is the first step to developing an effective social support intervention to support FCGs and healthcare providers in improving cancer care.
A. Rodenhauser, W.W. Good, B. Zenger, J. Tate, K. Aras, B. Burton, R.S. MacLeod.
PFEIFER: Preprocessing Framework for Electrograms Intermittently Fiducialized from Experimental Recordings, In The Journal of Open Source Software, Vol. 3, No. 21, The Open Journal, pp. 472. Jan, 2018.
DOI: 10.21105/joss.00472
Preprocessing Framework for Electrograms Intermittently Fiducialized from Experimental Recordings (PFEIFER) is a MATLAB Graphical User Interface designed to process bioelectric signals acquired from experiments.
PFEIFER was specifically designed to process electrocardiographic recordings from electrodes placed on or around the heart or on the body surface. Specific steps included in PFEIFER allow the user to remove some forms of noise, correct for signal drift, and mark specific instants or intervals in time (fiducialize) within all of the time sampled channels. PFEIFER includes many unique features that allow the user to process electrical signals in a consistent and time efficient manner, with additional options for advanced user configurations and input. PFEIFER is structured as a consolidated framework that provides many standard processing pipelines but also has flexibility to allow the user to customize many of the steps. PFEIFER allows the user to import time aligned cardiac electrical signals, semi-automatically determine fiducial markings from those signals, and perform computational tasks that prepare the signals for subsequent display and analysis.
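As a conceptual illustration of one preprocessing step that a tool like PFEIFER automates (baseline-drift correction), the following Python sketch detrends a synthetic channel; PFEIFER itself is a MATLAB GUI, so this is not its code, and the signal is invented.

```python
import numpy as np
from scipy.signal import detrend

# Synthetic electrogram-like channel: a 5 Hz oscillation plus slow baseline drift.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * t      # the 0.5*t term models drift

# Baseline-drift correction is one typical preprocessing step; a linear detrend
# is the simplest version of that step.
corrected = detrend(signal)
print(signal.mean(), corrected.mean())
```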
U. Ruede, K. Willcox, L. C. McInnes, H. De Sterck, G. Biros, H. Bungartz, J. Corones, E. Cramer, J. Crowley, O. Ghattas, M. Gunzburger, M. Hanke, R. Harrison, M. Heroux, J. Hesthaven, P. Jimack, C. Johnson, K. E. Jordan, D. E. Keyes, R. Krause, V. Kumar, S. Mayer, J. Meza, K. M. Mørken, J. T. Oden, L. Petzold, P. Raghavan, S. M. Shontz, A. Trefethen, P. Turner, V. Voevodin, B. Wohlmuth, C. S. Woodward.
Research and Education in Computational Science and Engineering, In SIAM Review, Vol. 60, No. 3, SIAM, pp. 707--754. Jan, 2018.
DOI: 10.1137/16m1096840
This report presents challenges, opportunities and directions for computational science and engineering (CSE) research and education for the next decade. Over the past two decades the field of CSE has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers with algorithmic inventions and software systems that transcend disciplines and scales. CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments—including the architectural complexity of extreme-scale computing, the data revolution and increased attention to data-driven discovery, and the specialization required to follow the applications to new frontiers—is redefining the scope and reach of the CSE endeavor. With these many current and expanding opportunities for the CSE field, there is a growing demand for CSE graduates and a need to expand CSE educational offerings. This need includes CSE programs at both the undergraduate and graduate levels, as well as continuing education and professional development programs, exploiting the synergy between computational science and data science. Yet, as institutions consider new and evolving educational programs, it is essential to consider the broader research challenges and opportunities that provide the context for CSE education and workforce development.
A. Sanderson, A. Humphrey, J. Schmidt, R. Sisneros.
Coupling the Uintah Framework and the VisIt Toolkit for Parallel In Situ Data Analysis and Visualization and Computational Steering, In High Performance Computing, June, 2018.
Data analysis and visualization are an essential part of the scientific discovery process. As HPC simulations have grown, I/O has become a bottleneck, which has required scientists to turn to in situ tools for simulation data exploration. Incorporating additional data, such as runtime performance data, into the analysis or I/O phases of a workflow is routinely avoided for fear of exacerbating performance issues. The paper presents how the Uintah Framework, a suite of HPC libraries and applications for simulating complex chemical and physical reactions, was coupled with VisIt, an interactive analysis and visualization toolkit, to allow scientists to perform parallel in situ visualization of simulation and runtime performance data. An additional benefit of the coupling is that it made it possible to create a "simulation dashboard" that allowed for in situ computational steering and visual debugging.
A. Sanderson, X. Tricoche.
Exploration of periodic flow fields, In 18th International Symposium on Flow Visualization, 2018.
One of the difficulties researchers face when exploring flow fields is understanding the respective strengths and limitations of the visualization and analysis techniques that can be applied to their particular problem. We consider in this paper the visualization of doubly periodic flow fields. Specifically, we compare and contrast two traditional visualization techniques, the Poincaré plot and the finite-time Lyapunov exponent (FTLE) plot with a technique recently proposed by the authors, which enhances the Poincaré plot with analytical results that reveal the topology. As is often the case, no single technique achieves a holistic visualization of the flow field that would address all the needs of the analysis. Instead, we show that additional insight can be gained from applying them in combination.
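A minimal sketch of the Poincaré-plot construction for a time-periodic flow (not the authors' tool): advect a seed through the flow and record its position once per period; the resulting point set reveals island chains and chaotic regions. The velocity field below is a hypothetical example.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical time-periodic 2-D velocity field (period T = 1).
def velocity(t, xy):
    x, y = xy
    return [-np.sin(np.pi * x) * np.cos(np.pi * y),
            np.cos(np.pi * x) * np.sin(np.pi * y) + 0.1 * np.sin(2 * np.pi * t)]

# Poincaré plot: integrate one period at a time and record the end point.
T, n_periods = 1.0, 50
state = np.array([0.3, 0.2])
points = []
for _ in range(n_periods):
    sol = solve_ivp(velocity, (0.0, T), state, rtol=1e-8, atol=1e-10)
    state = sol.y[:, -1]
    points.append(state.copy())
print(np.array(points)[:5])
```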
I. J. Schwerdt, A. Brenkmann, S. Martinson, B. D. Albrecht, S. Heffernan, M. R. Klosterman, T. Kirkham, T. Tasdizen, L. W. McDonald IV.
Nuclear proliferomics: A new field of study to identify signatures of nuclear materials as demonstrated on alpha-UO3, In Talanta, Vol. 186, Elsevier BV, pp. 433--444. Aug, 2018.
DOI: 10.1016/j.talanta.2018.04.092
The use of a limited set of signatures in nuclear forensics and nuclear safeguards may reduce the discriminating power for identifying unknown nuclear materials, or for verifying processing at existing facilities. Nuclear proliferomics is a proposed new field of study that advocates for the acquisition of large databases of nuclear material properties from a variety of analytical techniques. As demonstrated on a common uranium trioxide polymorph, α-UO3, in this paper, nuclear proliferomics increases the ability to improve confidence in identifying the processing history of nuclear materials. Specifically, α-UO3 was investigated from the calcination of unwashed uranyl peroxide at 350, 400, 450, 500, and 550 °C in air. Scanning electron microscopy (SEM) images were acquired of the surface morphology, and distinct qualitative differences are presented between unwashed and washed uranyl peroxide, as well as the calcination products from the unwashed uranyl peroxide at the investigated temperatures. Differential scanning calorimetry (DSC), UV–Vis spectrophotometry, powder X-ray diffraction (p-XRD), and thermogravimetric analysis-mass spectrometry (TGA-MS) were used to understand the source of these morphological differences as a function of calcination temperature. Additionally, the SEM images were manually segmented using Morphological Analysis for MAterials (MAMA) software to identify quantifiable differences in morphology for three different surface features present on the unwashed uranyl peroxide calcination products. No single quantifiable signature was sufficient to discern all calcination temperatures with a high degree of confidence; therefore, advanced statistical analysis was performed to combine a number of quantitative signatures, with their associated uncertainties, and allow for complete discernment by calcination history. Furthermore, machine learning was applied to the acquired SEM images to demonstrate automated discernment with at least 89% accuracy.
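As an illustration of the final step, combining several quantitative morphology signatures in a cross-validated classifier could look like the sketch below; the features, labels, and model choice are synthetic stand-ins, not the paper's data or method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical morphology features per segmented particle (e.g. area,
# circularity) with the calcination temperature as the label.
rng = np.random.default_rng(4)
n = 300
temps = rng.choice([350, 400, 450, 500, 550], size=n)
features = np.column_stack([
    temps / 100 + rng.normal(0, 0.4, n),      # feature loosely tied to temperature
    rng.normal(0, 1, n),                      # uninformative feature
])

# Combining several quantitative signatures in one classifier is the kind of
# multivariate discernment the abstract describes (details are illustrative).
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, features, temps, cv=5).mean())
```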
B. Summa, N. Faraj, C. Licorish, V. Pascucci.
Flexible Live‐Wire: Image Segmentation with Floating Anchors, In Computer Graphics Forum, Vol. 37, No. 2, Wiley, pp. 321-328. May, 2018.
DOI: 10.1111/cgf.13364
We introduce Flexible Live‐Wire, a generalization of the Live‐Wire interactive segmentation technique with floating anchors. In our approach, the user input for Live‐Wire is no longer limited to the setting of pixel‐level anchor nodes, but can use more general anchor sets. These sets can be of any dimension, size, or connectedness. The generality of the approach allows the design of a number of user interactions while providing the same functionality as the traditional Live‐Wire. In particular, we experiment with this new flexibility by designing four novel Live‐Wire interactions based on specific primitives: paint, pinch, probable, and pick anchors. These interactions are only a subset of the possibilities enabled by our generalization. Moreover, we discuss the computational aspects of this approach and provide practical solutions to alleviate any additional overhead. Finally, we illustrate our approach and new interactions through several example segmentations.
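A sketch of the core idea (not the authors' implementation): Live-Wire computes minimum-cost paths over a pixel graph, and a floating anchor generalizes the single source pixel to a set, which maps naturally to a multi-source shortest-path query. The cost image and anchor stroke below are invented for illustration.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

# Toy cost image; live-wire paths prefer pixels with low cost (e.g. strong edges).
rng = np.random.default_rng(2)
cost = rng.uniform(0.1, 1.0, size=(16, 16))
h, w = cost.shape

def node(r, c):
    return r * w + c

# 4-connected pixel graph: the weight of an edge is the cost of the pixel it enters.
rows, cols, weights = [], [], []
for r in range(h):
    for c in range(w):
        for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                rows.append(node(r, c))
                cols.append(node(rr, cc))
                weights.append(cost[rr, cc])
graph = coo_matrix((weights, (rows, cols)), shape=(h * w, h * w))

# A floating anchor is a *set* of pixels (e.g. a painted stroke) rather than one
# pixel; a multi-source Dijkstra gives the optimal wire cost from the nearest
# anchor pixel to every other pixel.
anchor_set = [node(0, c) for c in range(0, w, 4)]
dist = dijkstra(graph, indices=anchor_set, min_only=True)
print(dist.reshape(h, w)[-1, -1])   # cost of the best wire to the far corner
```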
T. Tasdizen, M. Sajjadi, M. Javanmardi, N. Ramesh.
Improving the robustness of convolutional networks to appearance variability in biomedical images, In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, April, 2018.
DOI: 10.1109/isbi.2018.8363636
While convolutional neural networks (CNN) produce state-of-the-art results in many applications including biomedical image analysis, they are not robust to variability in the data that is not well represented by the training set. An important source of variability in biomedical images is the appearance of objects such as contrast and texture due to different imaging settings. We introduce the neighborhood similarity layer (NSL) which can be used in a CNN to improve robustness to changes in the appearance of objects that are not well represented by the training data. The proposed NSL transforms its input feature map at a given pixel by computing its similarity to the surrounding neighborhood. This transformation is spatially varying, hence not a convolution. It is differentiable; therefore, networks including the proposed layer can be trained in an end-to-end manner. We demonstrate the advantages of the NSL for the vasculature segmentation and cell detection problems.
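A simplified numpy sketch in the spirit of the neighborhood similarity layer (not the paper's exact layer): each pixel's feature vector is compared with its spatial neighbours, and the similarities become the output channels. The window size and the choice of cosine similarity are assumptions made for this example.

```python
import numpy as np

def neighborhood_similarity(features, radius=1):
    """Sketch of a neighbourhood-similarity transform over an (H, W, C) feature map.

    Returns an (H, W, K) map where each output channel is the cosine similarity
    between a pixel's feature vector and one of its K offset neighbours.
    """
    H, W, C = features.shape
    norm = features / (np.linalg.norm(features, axis=2, keepdims=True) + 1e-8)
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1) if (dy, dx) != (0, 0)]
    padded = np.pad(norm, ((radius, radius), (radius, radius), (0, 0)))
    out = np.zeros((H, W, len(offsets)))
    for k, (dy, dx) in enumerate(offsets):
        shifted = padded[radius + dy: radius + dy + H, radius + dx: radius + dx + W]
        out[..., k] = np.sum(norm * shifted, axis=2)      # cosine similarity
    return out

fmap = np.random.default_rng(3).normal(size=(8, 8, 4))
print(neighborhood_similarity(fmap).shape)    # (8, 8, 8)
```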
S. Thomas, J. Silvernagel, N. Angel, E. Kholmovski, E. Ghafoori, N. Hu, J. Ashton, D.J. Dosdall, R.S. MacLeod, R. Ranjan.
Higher contact force during radiofrequency ablation leads to a much larger increase in edema as compared to chronic lesion size, In Journal of Cardiovascular Electrophysiology, Wiley, June, 2018.
DOI: 10.1111/jce.13636
1 Introduction
Reversible edema is a part of any radiofrequency ablation, but its relationship with contact force is unknown. The goal of this study was to characterize, through histology and MRI, acute and chronic ablation lesions and reversible edema with contact force.
2 Methods and results
In a canine model (n = 14), chronic ventricular lesions were created with a 3.5‐mm tip ThermoCool SmartTouch (Biosense Webster) catheter at 25 W or 40 W for 30 seconds. Repeat ablation was performed after 3 months to create a second set of lesions (acute). Each ablation procedure was followed by in vivo T2‐weighted MRI for edema and late‐gadolinium enhancement (LGE) MRI for lesion characterization. For chronic lesions, the mean scar volumes at 25 W and 40 W were 77.8 ± 34.5 mm3 (n = 24) and 139.1 ± 69.7 mm3 (n = 12), respectively. The volume of chronic lesions increased (25 W: P < 0.001, 40 W: P < 0.001) with greater contact force. For acute lesions, the mean volumes of the lesion were 286.0 ± 129.8 mm3 (n = 19) and 422.1 ± 113.1 mm3 (n = 16) for 25 W and 40 W, respectively (P < 0.001 compared to chronic scar). On T2‐weighted MRI, the acute edema volume was on average 5.6–8.7 times higher than the acute lesion volume and increased with contact force (25 W: P = 0.001, 40 W: P = 0.011).
3 Conclusion
With increasing contact force, there is a marginal increase in lesion size, but it is accompanied by a significantly larger edema. The reversible edema that is much larger than the chronic lesion volume may explain some of the chronic procedure failures.