
SCI Publications

2015


M.U. Ghani, S.D. Kanik, A.O. Argunsah, T. Tasdizen, D. Unay, M. Cetin. “Dendritic spine shape classification from two-photon microscopy images,” In 2015 23rd Signal Processing and Communications Applications Conference (SIU), IEEE, May, 2015.
DOI: 10.1109/siu.2015.7129985

ABSTRACT

Functional properties of a neuron are coupled with its morphology, particularly the morphology of dendritic spines. Spine volume has been used as the primary morphological parameter to characterize the structure and function coupling. However, this reductionist approach neglects the rich shape repertoire of dendritic spines. The first step toward incorporating spine shape information into functional coupling is classifying the main spine shapes proposed in the literature. Due to the lack of reliable, fully automatic tools for analyzing spine morphology, such analysis is often performed manually, which is a laborious, time-intensive task prone to subjectivity. In this paper we present an automated approach that extracts features using basic image processing techniques and classifies spines as mushroom or stubby by applying machine learning algorithms. Out of 50 manually segmented mushroom and stubby spines, a Support Vector Machine classified 98% of the spines correctly.



C. Jones, T. Liu, N.W. Cohan, M. Ellisman, T. Tasdizen. “Efficient semi-automatic 3D segmentation for neuron tracing in electron microscopy images,” In Journal of Neuroscience Methods, Vol. 246, Elsevier BV, pp. 13--21. May, 2015.
DOI: 10.1016/j.jneumeth.2015.03.005

ABSTRACT

Background
In the area of connectomics, there is a significant gap between the time required for data acquisition and dense reconstruction of the neural processes contained in the same dataset. Automatic methods are able to eliminate this timing gap, but the state-of-the-art accuracy so far is insufficient for use without user corrections. If completed naively, this process of correction can be tedious and time consuming.

New method
We present a new semi-automatic method that can be used to perform 3D segmentation of neurites in EM image stacks. It utilizes an automatic method that creates a hierarchical structure for recommended merges of superpixels. The user is then guided through each predicted region to quickly identify errors and establish correct links.

Results
We tested our method on three datasets with both novice and expert users. Accuracy and timing were compared with published automatic, semi-automatic, and manual results.

Comparison with existing methods
Post-automatic correction methods have also been used in Mishchenko et al. (2010) and Haehn et al. (2014). These methods do not provide navigation or suggestions in the manner we present. Other semi-automatic methods require user input prior to the automatic segmentation such as Jeong et al. (2009) and Cardona et al. (2010) and are inherently different than our method.

Conclusion
Using this method on the three datasets, novice users achieved accuracy exceeding state-of-the-art automatic results, and expert users achieved accuracy on par with full manual labeling but with a 70% time improvement over other published examples.



F. Mesadi, M. Cetin, T. Tasdizen. “Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation,” In Lecture Notes in Computer Science, Springer International Publishing, pp. 703--710. 2015.
ISBN: 978-3-319-24574-4
DOI: 10.1007/978-3-319-24574-4_84

ABSTRACT

The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). The DNSM is formed by a disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.



N. Ramesh, F. Mesadi, M. Cetin, T. Tasdizen. “Disjunctive normal shape models,” In 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), IEEE, April, 2015.
DOI: 10.1109/isbi.2015.7164170

ABSTRACT

A novel implicit parametric shape model is proposed for the segmentation and analysis of medical images. The function representing the shape of an object is approximated as a union of N polytopes, each obtained as the intersection of M half-spaces; the shape function can therefore be written as a disjunction of conjunctions, i.e., in disjunctive normal form. The shape model is initialized using seed points defined by the user. We define a cost function based on the Chan-Vese energy functional. The model is differentiable; hence, gradient-based optimization algorithms are used to find the model parameters.
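The union-of-polytopes construction described above can be sketched directly: each polytope is a smooth conjunction (a product of sigmoids over its half-spaces), and the disjunction is formed via De Morgan's law. This is an illustrative sketch, not the authors' implementation; the `dnsm` function name, the homogeneous-coordinate parameterization, and the sharpness factor in the example are our assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnsm(points, W):
    """Disjunctive normal shape model membership.

    points : (P, 2) array of 2D coordinates.
    W      : (N, M, 3) half-space parameters; each row (a, b, c)
             defines the half-space a*x + b*y + c >= 0.
    Returns (P,) values in (0, 1); ~1 inside the union of polytopes.
    """
    aug = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    # Conjunction: product of sigmoids over the M half-spaces of each polytope.
    conj = np.prod(sigmoid(np.einsum('nmk,pk->nmp', W, aug)), axis=1)  # (N, P)
    # Disjunction via De Morgan: 1 - prod_i (1 - conjunction_i).
    return 1.0 - np.prod(1.0 - conj, axis=0)
```

For example, a single polytope with half-spaces x >= 0, x <= 1, y >= 0, y <= 1 (weights scaled for sharpness) evaluates to roughly 1 inside the unit square and roughly 0 outside it.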



M. Sajjadi, M. Seyedhosseini, T. Tasdizen. “Nonlinear Regression with Logistic Product Basis Networks,” In IEEE Signal Processing Letters, Vol. 22, No. 8, IEEE, pp. 1011--1015. Aug, 2015.
DOI: 10.1109/lsp.2014.2380791

ABSTRACT

We introduce a novel general regression model that is based on a linear combination of a new set of non-local basis functions that forms an effective feature space. We propose a training algorithm that learns all the model parameters simultaneously and offer an initialization scheme for parameters of the basis functions. We show through several experiments that the proposed method offers better coverage for high-dimensional space compared to local Gaussian basis functions and provides competitive performance in comparison to other state-of-the-art regression methods.



M. Seyedhosseini, T. Tasdizen. “Disjunctive normal random forests,” In Pattern Recognition, Vol. 48, No. 3, Elsevier BV, pp. 976--983. March, 2015.
DOI: 10.1016/j.patcog.2014.08.023

ABSTRACT

We develop a novel supervised learning/classification method, called disjunctive normal random forest (DNRF). A DNRF is an ensemble of randomly trained disjunctive normal decision trees (DNDT). To construct a DNDT, we formulate each decision tree in the random forest as a disjunction of rules, which are conjunctions of Boolean functions. We then approximate this disjunction of conjunctions with a differentiable function and approach the learning process as a risk minimization problem that incorporates the classification error into a single global objective function. The minimization problem is solved using gradient descent. DNRFs are able to learn complex decision boundaries and achieve low generalization error. We present experimental results demonstrating the improved performance of DNDTs and DNRFs over conventional decision trees and random forests. We also show the superior performance of DNRFs over state-of-the-art classification methods on benchmark datasets.
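The differentiable approximation at the heart of a DNDT can be sketched as follows: each conjunction of Boolean functions is relaxed to a product of sigmoids, the disjunction to 1 - prod(1 - .), and the squared classification error then has a closed-form gradient for the gradient descent the abstract mentions. The parameterization, names, and shapes below are our own illustration, not the paper's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnf_forward(W, x):
    """Smooth disjunction of conjunctions.

    W : (N, M, K) weights; x : (K,) input (bias term already appended).
    conj_i = prod_j sigmoid(W[i, j] . x);  f = 1 - prod_i (1 - conj_i).
    """
    s = sigmoid(W @ x)            # (N, M) half-space activations
    conj = s.prod(axis=1)         # (N,) smooth conjunctions
    return 1.0 - np.prod(1.0 - conj), s, conj

def dnf_grad(W, x, y):
    """Gradient of the squared loss (f - y)^2 with respect to W."""
    f, s, conj = dnf_forward(W, x)
    # d f / d conj_i = prod_{k != i} (1 - conj_k)
    rest = np.array([np.prod(np.delete(1.0 - conj, i)) for i in range(len(conj))])
    # d conj_i / d W[i, j] = conj_i * (1 - s[i, j]) * x
    dW = (2.0 * (f - y) * rest * conj)[:, None, None] * (1.0 - s)[:, :, None] * x
    return f, dW
```

A training step is then simply W -= lr * dW, accumulated over the training samples.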



S.M. Seyedhosseini, S. Shushruth, T. Davis, J.M. Ichida, P.A. House, B. Greger, A. Angelucci, T. Tasdizen. “Informative features of local field potential signals in primary visual cortex during natural image stimulation,” In Journal of Neurophysiology, Vol. 113, No. 5, American Physiological Society, pp. 1520--1532. March, 2015.
DOI: 10.1152/jn.00278.2014

ABSTRACT

The local field potential (LFP) is of growing importance in neurophysiology as a metric of network activity and as a readout signal for use in brain-machine interfaces. However, there are uncertainties regarding the kind and visual field extent of information carried by LFP signals, as well as the specific features of the LFP signal conveying such information, especially under naturalistic conditions. To address these questions, we recorded LFP responses to natural images in V1 of awake and anesthetized macaques using Utah multielectrode arrays. First, we have shown that it is possible to identify presented natural images from the LFP responses they evoke using trained Gabor wavelet (GW) models. Because GW models were devised to explain the spiking responses of V1 cells, this finding suggests that local spiking activity and LFPs (thought to reflect primarily local synaptic activity) carry similar visual information. Second, models trained on scalar metrics, such as the evoked LFP response range, provide robust image identification, supporting the informative nature of even simple LFP features. Third, image identification is robust only for the first 300 ms following image presentation, and image information is not restricted to any of the spectral bands. This suggests that the short-latency broadband LFP response carries most information during natural scene viewing. Finally, best image identification was achieved by GW models incorporating information at the scale of ∼0.5° and trained using four different orientations. This suggests that during natural image viewing, LFPs carry stimulus-specific information at spatial scales corresponding to a few orientation columns in macaque V1.


2014


T. Liu, C. Jones, M. Seyedhosseini, T. Tasdizen. “A modular hierarchical approach to 3D electron microscopy image segmentation,” In Journal of Neuroscience Methods, Vol. 226, No. 15, pp. 88--102. April, 2014.
DOI: 10.1016/j.jneumeth.2014.01.022

ABSTRACT

The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses, and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire the final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-section segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-section segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-section segmentation accuracy.

Keywords: Image segmentation, Electron microscopy, Hierarchical segmentation, Semi-automatic segmentation, Neuron reconstruction
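The merge-tree resolution step, choosing for each part of the hierarchy either the merged region or its constituent children so that every leaf is covered exactly once, can be sketched as a dynamic program over the tree. This is a simplified illustration of consistency-constrained resolution, not the paper's implementation; the dict-based tree encoding and the scores are hypothetical.

```python
def resolve_merge_tree(children, scores, root):
    """Pick a set of non-overlapping tree nodes that covers every leaf
    exactly once (the consistency constraint), maximizing total score.

    children : dict node -> list of child nodes (missing for leaves)
    scores   : dict node -> classifier potential for keeping that region
    """
    def best(node):
        kids = children.get(node, [])
        if not kids:
            return scores[node], [node]
        child_score, child_pick = 0.0, []
        for k in kids:
            s, p = best(k)
            child_score += s
            child_pick += p
        # Keep the merged region only if it beats its best sub-partition.
        if scores[node] >= child_score:
            return scores[node], [node]
        return child_score, child_pick
    return best(root)[1]
```

For example, if a merged region 'A' scores higher than the sum of its leaves but the root scores lower than its children combined, the resolution keeps 'A' and the root's other child rather than the root itself.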



A. J. Perez, M. Seyedhosseini, T. J. Deerinck, E. A. Bushong, S. Panda, T. Tasdizen, M. H. Ellisman. “A workflow for the automatic segmentation of organelles in electron microscopy image stacks,” In Frontiers in Neuroanatomy, Vol. 8, No. 126, 2014.
DOI: 10.3389/fnana.2014.00126

ABSTRACT

Electron microscopy (EM) facilitates analysis of the form, distribution, and functional status of key organelle systems in various pathological processes, including those associated with neurodegenerative disease. Such EM data often provide important new insights into the underlying disease mechanisms. The development of more accurate and efficient methods to quantify changes in subcellular microanatomy has already proven key to understanding the pathogenesis of Parkinson's and Alzheimer's diseases, as well as glaucoma. While our ability to acquire large volumes of 3D EM data is progressing rapidly, more advanced analysis tools are needed to assist in measuring precise three-dimensional morphologies of organelles within data sets that can include hundreds to thousands of whole cells. Although new imaging instrument throughputs can exceed teravoxels of data per day, image segmentation and analysis remain significant bottlenecks to achieving quantitative descriptions of whole cell structural organellomes. Here, we present a novel method for the automatic segmentation of organelles in 3D EM image stacks. Segmentations are generated using only 2D image information, making the method suitable for anisotropic imaging techniques such as serial block-face scanning electron microscopy (SBEM). Additionally, no assumptions about 3D organelle morphology are made, ensuring the method can be easily expanded to any number of structurally and functionally diverse organelles. Following the presentation of our algorithm, we validate its performance by assessing the segmentation accuracy of different organelle targets in an example SBEM dataset and demonstrate that it can be efficiently parallelized on supercomputing resources, resulting in a dramatic reduction in runtime.



A. Perez, M. Seyedhosseini, T. Tasdizen, M. Ellisman. “Automated workflows for the morphological characterization of organelles in electron microscopy image stacks (LB72),” In The FASEB Journal, Vol. 28, No. 1 Supplement LB72, April, 2014.

ABSTRACT

Advances in three-dimensional electron microscopy (EM) have facilitated the collection of image stacks with a field-of-view that is large enough to cover a significant percentage of anatomical subdivisions at nano-resolution. When coupled with enhanced staining protocols, such techniques produce data that can be mined to establish the morphologies of all organelles across hundreds of whole cells in their in situ environments. Although instrument throughputs are approaching terabytes of data per day, image segmentation and analysis remain significant bottlenecks in achieving quantitative descriptions of whole cell organellomes. Here we describe computational workflows that achieve the automatic segmentation of organelles from regions of the central nervous system by applying supervised machine learning algorithms to slices of serial block-face scanning EM (SBEM) datasets. We also demonstrate that our workflows can be parallelized on supercomputing resources, resulting in a dramatic reduction of their run times. These methods significantly expedite the development of anatomical models at the subcellular scale and facilitate the study of how these models may be perturbed following pathological insults.



N. Ramesh, T. Tasdizen. “Cell tracking using particle filters with implicit convex shape model in 4D confocal microscopy images,” In 2014 IEEE International Conference on Image Processing (ICIP), IEEE, Oct, 2014.
DOI: 10.1109/icip.2014.7025089

ABSTRACT

Bayesian frameworks are commonly used in tracking algorithms. An important example is the particle filter, where a stochastic motion model describes the evolution of the state, and the observation model relates the noisy measurements to the state. Particle filters have been used to track the lineage of cells. Propagating the shape model of the cell through the particle filter is beneficial for tracking. We approximate arbitrary shapes of cells with a novel implicit convex function. The importance sampling step of the particle filter is defined using the cost associated with fitting our implicit convex shape model to the observations. Our technique is capable of tracking the lineage of cells for nonmitotic stages. We validate our algorithm by tracking the lineage of retinal and lens cells in zebrafish embryos.
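The predict/weight/resample loop of such a particle filter can be sketched generically. Here the importance weight is a plain Gaussian likelihood around the observed centroid; in the paper it comes from the cost of fitting the implicit convex shape model to the observations, which is not reproduced here. All names and parameter values are illustrative.

```python
import numpy as np

def particle_filter(observations, n_particles=500, motion_std=1.0,
                    obs_std=2.0, seed=0):
    """Bootstrap particle filter sketch for tracking a 2D cell centroid.

    observations : (T, 2) array of noisy centroid measurements.
    Returns (T, 2) array of state estimates.
    """
    rng = np.random.default_rng(seed)
    particles = (np.tile(observations[0], (n_particles, 1))
                 + rng.normal(0, motion_std, (n_particles, 2)))
    estimates = []
    for z in observations:
        # Predict: random-walk motion model.
        particles += rng.normal(0, motion_std, particles.shape)
        # Importance weights: Gaussian observation likelihood
        # (the paper uses a shape-model fitting cost here instead).
        w = np.exp(-np.sum((particles - z) ** 2, axis=1) / (2 * obs_std ** 2))
        w /= w.sum()
        # Estimate: weighted mean of the particles.
        estimates.append(w @ particles)
        # Resample (multinomial, the simplest scheme).
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)
```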



M. Seyedhosseini, T. Tasdizen. “Scene Labeling with Contextual Hierarchical Models,” In CoRR, Vol. abs/1402.0595, 2014.

ABSTRACT

Scene labeling is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in scene labeling frameworks has been widely recognized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for scene labeling. At each level of the hierarchy, a classifier is trained on downsampled input images and the outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is based purely on input image patches and does not make use of any fragments or shape examples; hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM outperforms the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).
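The cascade described above (each level classifies a downsampled image plus the upsampled outputs of coarser levels) can be sketched with a trivial image pyramid. The placeholder threshold "classifiers" below stand in for the trained classifiers; everything else is our own simplification of the CHM idea.

```python
import numpy as np

def downsample(img):
    """2x2 block-mean downsampling (assumes even dimensions)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbor 2x upsampling."""
    return np.kron(img, np.ones((2, 2)))

def contextual_hierarchy(image, classifiers):
    """CHM-style cascade: each level's classifier sees the image at that
    resolution stacked with the upsampled outputs of all coarser levels.

    classifiers : list of functions, coarsest level first, each mapping
    an (H, W, C) feature stack to an (H, W) probability map.
    """
    n = len(classifiers)
    pyramid = [image]
    for _ in range(n - 1):
        pyramid.append(downsample(pyramid[-1]))
    context = []  # coarser-level outputs, kept at the current resolution
    for level in range(n - 1, -1, -1):
        feats = np.stack([pyramid[level]] + context, axis=-1)
        out = classifiers[n - 1 - level](feats)
        context.append(out)
        if level > 0:
            context = [upsample(c) for c in context]
    return context[-1]  # final full-resolution labeling
```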



T. Tasdizen, M. Seyedhosseini, T. Liu, C. Jones, E. Jurrus. “Image Segmentation for Connectomics Using Machine Learning,” In Computational Intelligence in Biomedical Imaging, Edited by Suzuki, Kenji, Springer New York, pp. 237--278. 2014.
ISBN: 978-1-4614-7244-5
DOI: 10.1007/978-1-4614-7245-2_10

ABSTRACT

Reconstruction of neural circuits at the microscopic scale of individual neurons and synapses, also known as connectomics, is an important challenge for neuroscience. While an important motivation of connectomics is providing anatomical ground truth for neural circuit models, the ability to decipher neural wiring maps at the individual cell level is also important in studies of many neurodegenerative diseases. Reconstruction of a neural circuit at the individual neuron level requires the use of electron microscopy images due to their extremely high resolution. Computational challenges include pixel-by-pixel annotation of these images into classes such as cell membrane, mitochondria and synaptic vesicles and the segmentation of individual neurons. State-of-the-art image analysis solutions are still far from the accuracy and robustness of human vision and biologists are still limited to studying small neural circuits using mostly manual analysis. In this chapter, we describe our image analysis pipeline that makes use of novel supervised machine learning techniques to tackle this problem.


2013


C. Jones, T. Liu, M. Ellisman, T. Tasdizen. “Semi-Automatic Neuron Segmentation in Electron Microscopy Images Via Sparse Labeling,” In Proceedings of the 2013 IEEE 10th International Symposium on Biomedical Imaging (ISBI), pp. 1304--1307. April, 2013.
DOI: 10.1109/ISBI.2013.6556771

ABSTRACT

We introduce a novel method for utilizing user input to sparsely label membranes in electron microscopy images. Using gridlines as guides, the user marks where the guides cross the membrane to generate a sparsely labeled image. We use a best path algorithm to connect each of the sparse membrane labels. The resulting segmentation has a significantly better Rand error than automatic methods while requiring as little as 2% of the image to be labeled.
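A best-path step of this kind is typically a minimum-cost path search over pixels, with per-pixel cost low where the membrane probability is high so the path prefers to follow the membrane. The abstract does not specify the algorithm; the Dijkstra-based sketch below is our illustration.

```python
import heapq
import numpy as np

def best_path(cost, start, goal):
    """Minimum-cost 4-connected pixel path from start to goal (Dijkstra).

    cost : 2D array of per-pixel step costs (e.g. inverse membrane
    probability). Returns the path as a list of (row, col) pixels.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist[node]:
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk the predecessor chain back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```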



C. Jones, M. Seyedhosseini, M. Ellisman, T. Tasdizen. “Neuron Segmentation in Electron Microscopy Images Using Partial Differential Equations,” In Proceedings of 2013 IEEE 10th International Symposium on Biomedical Imaging (ISBI), pp. 1457--1460. April, 2013.
DOI: 10.1109/ISBI.2013.6556809

ABSTRACT

In connectomics, neuroscientists seek to identify the synaptic connections between neurons. Segmentation of cell membranes using supervised learning algorithms on electron microscopy images of brain tissue is often done to assist in this effort. Here we present a partial differential equation with a novel growth term to improve the results of a supervised learning algorithm. We also introduce a new method for representing the resulting image that allows for more dynamic thresholding to further improve the result. Using these two processes we are able to close small- to medium-sized gaps in the cell membrane detection and improve the Rand error by as much as 9% over the initial supervised segmentation.



T. Liu, M. Seyedhosseini, M. Ellisman, T. Tasdizen. “Watershed Merge Forest Classification for Electron Microscopy Image Stack Segmentation,” In Proceedings of the 2013 International Conference on Image Processing, 2013.

ABSTRACT

Automated electron microscopy (EM) image analysis techniques can be tremendously helpful for connectomics research. In this paper, we extend our previous work [1] and propose a fully automatic method to utilize inter-section information for intra-section neuron segmentation of EM image stacks. A watershed merge forest is built via the watershed transform with each tree representing the region merging hierarchy of one 2D section in the stack. A section classifier is learned to identify the most likely region correspondence between adjacent sections. The inter-section information from such correspondence is incorporated to update the potentials of tree nodes. We resolve the merge forest using these potentials together with consistency constraints to acquire the final segmentation of the whole stack. We demonstrate that our method leads to notable segmentation accuracy improvement by experimenting with two types of EM image data sets.



N. Ramesh, T. Tasdizen. “Three-dimensional alignment and merging of confocal microscopy stacks,” In 2013 IEEE International Conference on Image Processing, IEEE, September, 2013.
DOI: 10.1109/icip.2013.6738297

ABSTRACT

We describe an efficient, robust, automated method for image alignment and merging of translated, rotated, and flipped confocal microscopy stacks. The samples are captured in both directions (top and bottom) to increase the SNR of the individual slices. We identify the overlapping region of the two stacks by using a variable-depth Maximum Intensity Projection (MIP) in the z dimension. For each depth tested, the MIP images give an estimate of the angle of rotation between the stacks and of the shifts in the x and y directions, using the Fourier shift property in 2D. We then use the estimated rotation angle and the x and y shifts to align the images in the z direction. A linear blending technique based on a sigmoidal function is used to maximize the information from the stacks and combine them. We get maximum information gain as we combine stacks obtained from both directions.
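The two ingredients mentioned, shift estimation via the Fourier shift property and sigmoidal blending along z, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: phase correlation here recovers only integer translations, and the blending weights are a guess at the sigmoidal scheme described.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (dy, dx) translation between two images
    using the Fourier shift property (cross-power spectrum peak)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12          # keep phase only
    corr = np.abs(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks past the midpoint to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

def sigmoid_blend(top, bottom, depth_axis=0, steepness=1.0):
    """Blend two aligned stacks with sigmoidal weights along z,
    favoring each stack on the side where its slices are sharper."""
    z = np.arange(top.shape[depth_axis]) - top.shape[depth_axis] / 2.0
    w = 1.0 / (1.0 + np.exp(steepness * z))   # ~1 at the top, ~0 at the bottom
    shape = [1] * top.ndim
    shape[depth_axis] = -1
    w = w.reshape(shape)
    return w * top + (1.0 - w) * bottom
```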



M. Seyedhosseini, M. Ellisman, T. Tasdizen. “Segmentation of Mitochondria in Electron Microscopy Images using Algebraic Curves,” In Proceedings of the 2013 IEEE 10th International Symposium on Biomedical Imaging (ISBI), pp. 860--863. 2013.
DOI: 10.1109/ISBI.2013.6556611

ABSTRACT

High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.
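Fitting an algebraic curve to boundary points is commonly done by minimizing the algebraic distance of an implicit polynomial; a second-order example (an implicit conic fitted via the smallest right singular vector of the design matrix) is sketched below. The paper's actual feature construction is richer than this; the sketch only illustrates the fitting step.

```python
import numpy as np

def fit_algebraic_curve(points):
    """Least-squares fit of an implicit conic
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to 2D boundary points.

    Returns the unit-norm coefficient vector minimizing the algebraic
    distance, i.e. the smallest right singular vector of the design matrix.
    """
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

def algebraic_distance(coeffs, points):
    """Evaluate the implicit polynomial at the given points."""
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    return D @ coeffs
```

Points sampled from a circle, for instance, yield a coefficient vector whose algebraic distance is essentially zero on the sample.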



M. Seyedhosseini, T. Tasdizen. “Multi-class Multi-scale Series Contextual Model for Image Segmentation,” In IEEE Transactions on Image Processing, Vol. PP, No. 99, 2013.
DOI: 10.1109/TIP.2013.2274388

ABSTRACT

Contextual information has been widely used as a rich source of information to segment multiple objects in an image. A contextual model utilizes the relationships between the objects in a scene to facilitate object detection and segmentation. However, using contextual information from different objects in an effective way for object segmentation remains a difficult problem. In this paper, we introduce a novel framework, called the multi-class multi-scale (MCMS) series contextual model, which uses contextual information from multiple objects and at different scales for learning discriminative models in a supervised setting. The MCMS model incorporates cross-object and inter-object information into one probabilistic framework and thus is able to capture geometrical relationships and dependencies among multiple objects in addition to local information from each single object present in an image. We demonstrate that our MCMS model improves object segmentation performance in electron microscopy images and provides a coherent segmentation of multiple objects. By speeding up the segmentation process, the proposed method will allow neurobiologists to move beyond individual specimens and analyze populations, paving the way for understanding neurodegenerative diseases at the microscopic level.