To perform accurate spatial analysis of the electrical recordings from large-animal cardiac recording experiments, researchers must register the location of each electrode, typically by defining correspondence points between post-experiment three-dimensional imaging and directly measured surface electrode positions. Often, due to practical limitations of the experimental setting, only a subset of the electrode locations can be measured, and the rest must be estimated. We have developed a pipeline, GRÖMeR, that can register cardiac surface electrode arrays given a limited correspondence point set. The pipeline accounts for global deformations, using a modified iterative closest point (ICP) algorithm followed by a geodesically constrained radial basis function deformation to compute a smooth, correspondence-driven registration. To assess its performance, we generated a series of target geometries and limited correspondence patterns based on experimental scenarios. We found that the best-performing correspondence pattern required only 20 approximately uniformly distributed points over the epicardial surface of the heart. This study demonstrated the GRÖMeR pipeline to be an accurate and effective way to register cardiac sock electrode arrays from limited correspondence points.
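The two stages named above can be sketched in simplified form: a rigid ICP alignment (nearest-neighbour matching plus a Kabsch solve) followed by a radial-basis-function deformation driven by the correspondence points. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the geodesic constraint is replaced here by ordinary Euclidean Gaussian kernels for brevity.

```python
import numpy as np

def rigid_icp(source, target, n_iters=20):
    """Rigid ICP sketch: repeatedly match each source point to its
    nearest target point, then solve for the best rotation and
    translation with the Kabsch algorithm."""
    src = source.copy()
    for _ in range(n_iters):
        # Brute-force nearest-neighbour correspondences.
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        # Kabsch: optimal rotation between the centred point sets.
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
    return src

def rbf_deform(points, ctrl_src, ctrl_dst, sigma=0.2):
    """Interpolate the displacements observed at the control
    (correspondence) points with Gaussian RBFs and apply the resulting
    smooth deformation to all points. NOTE: the paper constrains this
    step geodesically over the surface; Euclidean distances are a
    simplification."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    K = kernel(ctrl_src, ctrl_src) + 1e-8 * np.eye(len(ctrl_src))
    weights = np.linalg.solve(K, ctrl_dst - ctrl_src)  # one column per axis
    return points + kernel(points, ctrl_src) @ weights
```

In a registration of this style, `rigid_icp` would remove the gross pose difference between the imaged and measured geometries, and `rbf_deform` would then pull the limited set of measured electrodes exactly onto their imaged counterparts while deforming the rest of the array smoothly.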
A longstanding problem in machine learning is to find unsupervised methods that can learn the statistical structure of high-dimensional signals. In recent years, generative adversarial networks (GANs) have gained much attention as a possible solution, and in particular have shown the ability to generate remarkably realistic, high-resolution sampled images. At the same time, many authors have pointed out that GANs may fail to model the full distribution ("mode collapse") and that using the learned models for anything other than generating samples may be very difficult. In this paper, we examine the utility of GANs in learning statistical models of images by comparing them to perhaps the simplest statistical model, the Gaussian mixture model (GMM). First, we present a simple method to evaluate generative models based on the relative proportions of samples that fall into predetermined bins. Unlike previous automatic methods for evaluating models, ours does not rely on an additional neural network, nor does it require approximating intractable computations. Second, we compare the performance of GANs to that of GMMs trained on the same datasets. While GMMs have previously been shown to model small image patches successfully, we show how to train them on full-sized images despite the high dimensionality. Our results show that GMMs can generate realistic samples (although less sharp than those of GANs) while also capturing the full distribution, which GANs fail to do. Furthermore, GMMs allow efficient inference and an explicit representation of the underlying statistical structure. Finally, we discuss how GMMs can be used to generate sharp images.
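The bin-based evaluation idea can be sketched as follows: define bins from the training data (here via a plain k-means clustering, one of several reasonable choices), then compare the fraction of generated samples landing in each bin to the training fraction with a two-proportion z-test, and count the bins where the proportions differ significantly. A mode-collapsed model leaves whole bins empty, so its count is high. The function name and the k-means binning are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def bin_proportion_eval(train, generated, n_bins=10, z_thresh=2.0, seed=0):
    """Count bins whose generated-sample proportion differs
    significantly from the training proportion.
    Bins are defined by k-means on the training data (an assumption
    made for this sketch)."""
    rng = np.random.default_rng(seed)
    # Plain k-means on the training data to define the bins.
    centers = train[rng.choice(len(train), n_bins, replace=False)]
    for _ in range(50):
        d2 = ((train[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for k in range(n_bins):
            if np.any(labels == k):
                centers[k] = train[labels == k].mean(0)

    def proportions(x):
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.bincount(d2.argmin(1), minlength=n_bins) / len(x)

    p, q = proportions(train), proportions(generated)
    # Pooled two-proportion z statistic, computed per bin.
    n1, n2 = len(train), len(generated)
    pooled = (p * n1 + q * n2) / (n1 + n2)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2)) + 1e-12
    z = np.abs(p - q) / se
    return int((z > z_thresh).sum())  # number of "different" bins
```

A model that reproduces the training distribution scores near zero, while a collapsed model that samples only one mode scores close to the number of bins covering the missing modes; no auxiliary network or intractable likelihood is needed, only sample counts.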