FIRST   v1.0

FMRIB's Integrated Registration and Segmentation Tool

subcortical brain segmentation using Bayesian shape & appearance models


Introduction

FIRST is a model-based segmentation/registration tool. The shape/appearance models used in FIRST are constructed from manually segmented images provided by the Center for Morphometric Analysis (CMA), MGH, Boston. The manual labels are parameterized as surface meshes and modelled as a point distribution model: deformable surfaces are used to automatically parameterize each volumetric label as a mesh, and these surfaces are constrained to preserve vertex correspondence across the training data. In addition, normalized intensities sampled along the surface normals are modelled. The shape and appearance model is based on multivariate Gaussian assumptions, so shape is expressed as a mean plus modes of variation (principal components). Using these learned models, FIRST searches through linear combinations of the shape modes of variation for the most probable shape instance given the observed intensities in your T1 image.
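As a sketch of that last point (this is the generic point distribution model formulation that such shape models are built on, not a verbatim statement of FIRST's internal code), a shape instance x, formed by concatenating the mesh vertex coordinates, can be written as

  x = \bar{x} + \sum_{i=1}^{k} b_i \phi_i

where \bar{x} is the mean shape, \phi_i are the principal components (eigenvectors of the training-shape covariance) and b_i are the mode parameters; the search is effectively over the b_i, guided by the intensity model.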

For more information on FIRST, see the FMRIB technical report.


Referencing

Currently, please reference the technical report and HBM 2007 abstract:

1. Brian Patenaude, Stephen Smith, David Kennedy, and Mark Jenkinson. FIRST - FMRIB's integrated registration and segmentation tool. In Human Brain Mapping Conference, 2007.

2. Brian Patenaude, Stephen Smith, David Kennedy, and Mark Jenkinson. Bayesian shape and appearance models. Technical report TR07BP1, FMRIB Centre - University of Oxford, 2007.


FIRST Training Data Contributors

We are very grateful for the training data for FIRST, particularly to David Kennedy at the CMA, and also to: Christian Haselgrove, Centre for Morphometric Analysis, Harvard; Bruce Fischl, Martinos Center for Biomedical Imaging, MGH; Janis Breeze and Jean Frazier, Child and Adolescent Neuropsychiatric Research Program, Cambridge Health Alliance; Larry Seidman and Jill Goldstein, Department of Psychiatry of Harvard Medical School.


Running FIRST

FIRST segmentation requires firstly that you run first_flirt to find the affine transformation to standard space, and secondly that you run run_first to segment a single structure (re-running it for each further structure that you require). Alternatively, you can use run_first_all, which does all of the above for you, running run_first on every subcortical structure in the models and producing a summary segmentation image covering all structures.
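For example, a typical all-in-one run might look like the following (the image names are illustrative, and the exact option set should be checked against the usage message printed by run_first_all):

run_first_all -i sub001_t1 -o sub001_first

The equivalent manual route, again with illustrative file names, an assumed number of modes and an assumed model location under ${FSLDIR}/data/first, would be along these lines:

first_flirt sub001_t1 sub001_to_std_sub
run_first -i sub001_t1 -t sub001_to_std_sub.mat -n 40 -o sub001_L_Hipp -m ${FSLDIR}/data/first/models_336_bin/L_Hipp_bin.bmv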


Output


Models


Boundary Correction with first_utils

The first_utils program is used to classify boundary voxels. It takes the segmentation image output by FIRST and classifies each boundary voxel as belonging to the structure or not; the output volume contains only a single label. first_utils operates on a single structure at a time.

Usage:

first_utils --singleBoundaryCorr -i output_name -r im1 -p 3 -o output_name_corr


first_utils

first_utils may also be used to fill meshes, calculate Dice overlap, and perform vertex-wise statistics.
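For example, a vertex-wise group analysis might be invoked along the following lines (the mode and option names here should be verified against the list printed by first_utils --help; all_subjects.bvars would come from concat_bvars, described below, and design.mat is a standard FSL design matrix):

first_utils --vertexAnalysis --usebvars -i all_subjects.bvars -d design.mat -o L_Hipp_vertex_stats --useReconMNI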


concat_bvars
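concat_bvars concatenates the .bvars (shape mode parameter) files produced for several subjects into a single file, ready for group vertex-wise analysis with first_utils. A typical call (file names illustrative) gives the output file first, followed by the per-subject inputs:

concat_bvars all_subjects.bvars sub001_L_Hipp.bvars sub002_L_Hipp.bvars sub003_L_Hipp.bvars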


first_roi_slicesdir
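first_roi_slicesdir produces a slicesdir-style set of summary images cropped around each segmented structure, for quick visual quality checking of FIRST output. A typical call (file patterns illustrative; check the script's usage message) passes the list of T1 images followed by the corresponding label images:

first_roi_slicesdir sub*_t1.nii.gz sub*_L_Hipp_first.nii.gz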