Medical Image Registration and Visualization on Tumor Growth with Time Series
- 1. Department of Industrial and System Engineering, University of Illinois at Urbana-Champaign, USA
Abstract
Data registration is a common process in medical image analysis. The goal of data registration is to solve the transformation problem of aligning multiple images. Conventionally, diagnosing tumors over time requires understanding the growth and spread of the tumor, which doctors do by visually inspecting multiple MRI scans taken at different points in a time series. Because the patient's posture differs between scans, comparing these multiple MRI scans is tedious. This problem is often addressed with non-rigid image registration, which can be slow and hard to implement. Rigid body registration, on the other hand, is sometimes faster and easier to implement; the downside is that it usually does not take deformation into consideration. In this paper, two rigid body registration methods are explored: the BFP method and the PA method. Their results are then compared with the ICP method.
Index Terms
Rigid body registration; Tumor registration; BFP; PA.
Citation
Wang K, Kesavadas T (2016) Medical Image Registration and Visualization on Tumor Growth with Time Series. Comput Sci Eng 1(1): 1005.
ABBREVIATIONS
ICP: Iterative Closest Point; VTK: Visualization Toolkit; MRS: Magnetic Resonance Spectroscopy; PET: Positron Emission Tomography; STL: Stereolithography; MRI: Magnetic Resonance Imaging; BFP: Best Fit Plane; PA: Principal Axes; FEA: Finite Element Analysis
INTRODUCTION
Image registration is the process of finding a common coordinate system for different images. There are two common approaches to image registration: deformable registration and rigid registration. Deformable registration has been used extensively for tumor registration. Brock et al. used an FEA method for liver and tumor registration [1]. The downside is that it takes substantial processing time. Lamecker and Pennec [2] used a cost function to minimize the uncertainty of correspondences and the unreliability of image intensity. This approach ignores the information around the lesion, so important information about the lesion is lost. Jenkinson and Smith [3] implemented an affine transformation (a linear mapping that maps points from one coordinate system to another) of brain images.
The method attempts globally optimized registration, and the authors claimed that the program is likely to take less than 1 hour to run. Kaus et al. [4] developed a surface-based registration method and applied it to human organs. The authors extracted control points and achieved a running time of a few seconds. However, manually selecting control points can be tedious. Mohamed et al. [5] built a statistical model to constrain the registration.
Cuadra et al. [7] employed the Maxwell's demons registration algorithm [6] with a lesion growth model. One of the drawbacks is that the seed location requires manual selection by an expert. Gendrin et al. [8] investigated the feasibility of real-time organ motion monitoring with rigid body registration. Mang et al. [9] did an extensive study on the consistency of rigid body registration and suggested using normalized mutual information for image registration between time points. Brett et al. [10] combined both affine transformation and non-rigid registration in a single cost function and tried different thresholds for the cost function.
In this paper, we explore two different rigid body registration approaches for tumor registration: BFP Registration and PA Registration. To our knowledge, these two methods have not previously been applied to tumor registration. The transformation matrix is found from the transformation between the BFP (or the PA) of the source image and that of the target image. The results are then compared with the best-known approach, ICP.
In order to visualize and compare the aligned tumors, VTK is used, with different colors representing the different time points of the tumor, so that doctors can compare the tumors more easily.
The rest of the paper is structured as follows: the Materials and Methods section introduces the methods and the data we used for registration; the Results and Discussion section compares the results of the three methods; the Conclusion lists some future work we plan to do.
MATERIALS AND METHODS
In this section, we provide a detailed description of the method we proposed and the data we used to generate the result.
Methods
BFP: Curve fitting is a common technique used by engineers and scientists to describe the behavior of data from simulations or experiments.
The idea we came up with was to fit a BFP to the data and then find the transformation matrix that transforms this plane to the template plane. The BFP is a polynomial fit (a polynomial being an expression of more than two algebraic terms, especially the sum of several terms that contain different powers of the same variable(s)); in our case it is a first-order linear fit. Consider the data as a set {P_i}, where i goes from 1, ..., n and n is the number of data points. Each P_i is a 3×1 vector represented in 3D space as $P_i = [x_i, y_i, z_i]^T$. The equation of a BFP is
z = ax + by + c (1)
The coefficients {a, b, c} of the equation can be computed from 3 non-collinear points. Since we have more than 3 points, finding {a, b, c} essentially becomes an optimization problem, that is, finding the {a, b, c} that minimizes the error between the original points and the points projected onto the plane. The error between the original data set and the BFP is
$E(a, b, c) = \sum_{i=1}^{n} \left( a x_i + b y_i + c - z_i \right)^2$ (2)
Denoting X, Y, Z as the three vectors $X = [x_1, \ldots, x_n]^T$, $Y = [y_1, \ldots, y_n]^T$, $Z = [z_1, \ldots, z_n]^T$, let us rewrite the equation as
$\begin{bmatrix} x_1 & y_1 & 1 \\ \vdots & \vdots & \vdots \\ x_n & y_n & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} z_1 \\ \vdots \\ z_n \end{bmatrix}$ (3)
The equation is now in matrix multiplication form, Ax = Z, with $A = [X \; Y \; \mathbf{1}]$ and $x = [a, b, c]^T$. The coefficients a, b, c are solved for by taking the pseudo-inverse of A.
$x = [a, b, c]^T = (A^T A)^{-1} A^T Z$ (4)
As long as $A^T A$ is full rank, a, b, c can be uniquely found.
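To make the least-squares step concrete, the following is a minimal NumPy sketch of the plane fit (not the authors' implementation); the array layout (an n × 3 `points` array) and the function name `fit_plane` are illustrative assumptions.

```python
import numpy as np

def fit_plane(points):
    """Fit z = a*x + b*y + c to an (n, 3) array of points by least squares.

    Returns the coefficients (a, b, c) that minimize the sum of squared
    residuals, i.e. the pseudo-inverse solution of A x = Z in Eqs. (3)-(4).
    """
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([X, Y, np.ones_like(X)])    # n x 3 design matrix
    coeffs, *_ = np.linalg.lstsq(A, Z, rcond=None)  # (A^T A)^-1 A^T Z
    return coeffs                                   # array([a, b, c])

# Example: noisy samples from the plane z = 2x - y + 3
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200, 3))
pts[:, 2] = 2 * pts[:, 0] - pts[:, 1] + 3 + 0.01 * rng.standard_normal(200)
print(fit_plane(pts))   # approximately [ 2. -1.  3.]
```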
The transformation matrix calculation is based on [11]. We assume a point-wise correspondence between the two BFPs. The reason is that the rotation required for alignment is reasonably small and will not exceed 90°.
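For reference, the SVD-based least-squares fit of [11] can be sketched as follows; this is a generic textbook implementation under the assumption of row-wise point correspondence, with illustrative names `source` and `target`, and is not the authors' code.

```python
import numpy as np

def rigid_transform(source, target):
    """Least-squares rotation R and translation t with target ~ R @ source + t,
    following the SVD method of Arun et al. [11]. Both inputs are (n, 3)
    arrays with row-wise point correspondence."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)   # centroids
    H = (source - cs).T @ (target - ct)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1 (proper rotation)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = ct - R @ cs
    return R, t
```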
PA Registration: The moment of inertia is commonly used in physics and other engineering fields and refers to a body's resistance to changes in its motion. In computer vision, an image moment is a weighted average of pixel intensities, or a function of them, as described in the following subsection. The orientation of an image can be extracted from the image moments and is described by the PA. After obtaining the PA for both the source image and the template image, the transformation that matches the source image's PA to the target's PA can be used to transform the source image to the target image.
Moment of inertia: Based on [12,13], the 3D ordinary moments $m_{ijk}$ are defined as follows
$m_{ijk} = \iiint x^i y^j z^k f(x, y, z)\, dx\, dy\, dz$ (5)
where i, j, k are non-negative integers, i + j + k is the order of the moment, and f is a binary function. In most laser scans or medical image scans, the 3D object is approximated by a tessellation of basic shapes, such as triangles or tetrahedra. Hence the discretized formulation is more useful. Let $A = \{A_1, \ldots, A_N\}$ be the set that contains all the triangles, where N is the number of elements. f(x, y, z) is defined as
$f(x, y, z) = \begin{cases} 1, & (x, y, z) \text{ inside the object} \\ 0, & \text{otherwise} \end{cases}$ (6)
Considering only the tetrahedral case, the volume integral can be rewritten as a surface integral over each individual element.
$m_{ijk} = \sum_{t=1}^{N} m_{ijk}^{t}$ (7)

where $m_{ijk}^{t}$ is the ordinary moment of each individual element. The moments contain information about image properties.
The first- and second-order moments are defined as follows [14]. The first-order moments give the centroid, which is used to standardize the position:

$\bar{x} = m_{100} / m_{000}, \quad \bar{y} = m_{010} / m_{000}, \quad \bar{z} = m_{001} / m_{000}$ (standardized position)

The second-order moments, taken about the centroid as central moments, describe the orientation:

$\mu_{ijk} = \iiint (x - \bar{x})^i (y - \bar{y})^j (z - \bar{z})^k f(x, y, z)\, dx\, dy\, dz, \quad i + j + k = 2$ (standardized orientation)
The moments we use for extracting rotation information are the second-order moments, i.e. those with i + j + k = 2. They can be used to calculate the principal axes, as we will see later.
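As an illustration, the second-order central moments of a sampled object can be accumulated as in the sketch below. Giving every sampled vertex unit weight is a simplification of the element-wise surface-integral formulation in Equation (7), so this is only an assumption-level approximation.

```python
import numpy as np

def second_moment_matrix(points):
    """3x3 matrix of second-order central moments of an (n, 3) point set,
    giving unit weight to every sampled vertex. Its eigenvectors are the
    principal axes of the point set."""
    d = points - points.mean(axis=0)   # standardized position (centroid at origin)
    return d.T @ d                     # entries are sums of x*x, x*y, ..., z*z

# Usage: eigenvectors ordered by descending eigenvalue give the PA
# M = second_moment_matrix(pts)
# eigvals, eigvecs = np.linalg.eigh(M)   # ascending eigenvalues
# axes = eigvecs[:, ::-1]                # columns: principal axes
```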
PA: PA can be used to describe the orientation of an object. PA works well even when the shape of the object is not symmetric.
Denote $\{u_1, u_2, u_3\}$ as the eigenvectors of the moment of inertia matrix. When the 3D object is centered at the origin, the PA are defined as the eigenvectors of the inertia matrix I [13,15], where

$I = \begin{bmatrix} \mu_{200} & \mu_{110} & \mu_{101} \\ \mu_{110} & \mu_{020} & \mu_{011} \\ \mu_{101} & \mu_{011} & \mu_{002} \end{bmatrix}$ (8)

$I u_i = \lambda_i u_i, \quad i = 1, 2, 3$ (9)
Ambiguity Elimination: Based on the definitions in the previous section, $\{u_1, u_2, u_3\}$ are the eigenvectors (an eigenvector is a vector which, when operated on by a given operator, gives a scalar multiple of itself) of the I matrix. Assume that $\{u_1, u_2, u_3\}$ are the normalized eigenvectors; the sign of each $u_i$ cannot be uniquely determined. To see why,
$I(-u_i) = -I u_i = -\lambda_i u_i = \lambda_i (-u_i)$ (10)

so $-u_i$ is also a unit eigenvector with the same eigenvalue. There are eight groups of eigenvectors in total, $\{\pm u_1, \pm u_2, \pm u_3\}$. This suggests that there are eight orientations that can describe the 3D object.
The ambiguity in this paper was eliminated based on Galvez and Canton [14]. First, the requirement of a right-handed system eliminates four of the combinations. Then a heuristic procedure was developed to eliminate the remaining three combinations.
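The right-handedness requirement can be expressed as a simple determinant check, as in the sketch below; the matrix `axes` (one eigenvector per column) is an illustrative assumption, and the heuristic of [14] that resolves the remaining sign combinations is not reproduced here.

```python
import numpy as np

def make_right_handed(axes):
    """Flip the third column if necessary so that the columns of `axes`
    form a right-handed coordinate system (determinant +1)."""
    if np.linalg.det(axes) < 0:
        axes = axes.copy()
        axes[:, 2] = -axes[:, 2]
    return axes
```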
Transformation matrix extraction: After extracting the PA from both images, we need a way to transform the sensed image's axes to the template's axes. Given two sets of axes $X_1$ and $X_2$ sharing the same origin, the rotation matrix that transforms one set of axes into the other is just a linear transformation:
$X_2 = R X_1$ (11)
where $X_1$, $X_2$, and R are 3×3 matrices. Because $X_1$ and $X_2$ are composed of principal axes, they are full rank and their inverses exist, so the R matrix can be found by:
$R = X_2 X_1^{-1}$ (12)
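Equation (12) maps directly to code; the minimal sketch below assumes the axis matrices store one principal axis per column (the names X1 and X2 follow the notation above).

```python
import numpy as np

def rotation_between_axes(X1, X2):
    """Rotation R satisfying X2 = R @ X1 (Eqs. 11-12), where X1 and X2 are
    full-rank 3x3 matrices whose columns are the source and target
    principal axes."""
    # For orthonormal axes, np.linalg.inv(X1) equals X1.T.
    return X2 @ np.linalg.inv(X1)
```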
BFP: The following are the basic steps of BFP method.
Step 1: Load the STL file of the organ.
Step 2: Calculate the centroids of two data sets, p and p0.
Step 3: Find the BFP; more specifically, find the 4 vertices used to form the BFP for each of the two data sets.
Step 4: Calculate the rotation matrix R and translation matrix T.
Step 5: Transform the source data to the target using the transformation found with the BFP.
PA: These are the steps of the PA method.
Step 1: Import the STL file into a CAD software.
Step 2: Find the PA using the values of the moment of inertia about the centroid.
Step 3: Calculate the centroids of the source image and target image data sets.
Step 4: Translate all the tumors to the origin.
Step 5: Calculate the rotation matrix R with Equation (12).
Step 6: Rotate the source image with the rotation matrix R.
Tumor data representation
There are different modalities that can be used to collect tumor data from a patient, such as CT, MRS, and PET. The data we received were STL files from different time points, belonging to OSF Saint Francis Medical Center. The data were originally acquired from MRI scans. The doctors then performed a 3D reconstruction using the medical imaging software OsiriX. This produced a 3D block of data containing the lung, skeleton, bones, fat, heart, catheter, and muscle. To extract a 3D organ or a tumor from the block, the doctor chose a threshold, which was used to separate the different parts of the data. The data were saved as separate STL files.
The purpose of registration is to transform all other tumors from different time series to the first time series. In other words, the first time series is the target. Figure 1 shows the original data. Different time series of the tumor suggest that the tumor is shrinking.
Figure 1: Shows the original data. Different time series of the tumor suggest that the tumor is shrinking.
RESULTS AND DISCUSSION
Figures 2-4 show the results of PA Registration, BFP Registration, and ICP, respectively.
Figure 2 shows that PA Registration performs better than BFP Registration for translation. The rotation between the two time points differs because the surrounding clusters do not match. This may be because we subsampled the tumor in order to feed it into the solid modeling software, making the second moment of inertia inaccurate. PA-based methods are popular because, even when the shape of the object is unknown, the PA describe the original object well. Finding the transformation matrix takes constant time, as does obtaining the principal axes from CAD software (as long as the number of vertices is small), so the computation is fast.
BFP is a simple way of extracting features of the data. The underlying assumption is that the shape of the image does not change much across time points. Ideally, if the segmentation from the MRI images is good and the shape is well preserved, the BFP should be a good representation of the spatial location of the data throughout the time series. The run time for calculating the BFP is O(n). The run time for calculating the transformation matrix is constant because only 4 vertices are required for this calculation.
Figure 2: PA Registration result. PA Registration performs better than BFP Registration for translation.
Figure 3 shows the result of BFP Registration. The registration results show that the translation does not work well. It is hard to judge the rotation because the shape of the tumor changed. There are many parts in the tumor; while matching the core part of the tumor is easy, matching all of the connected clusters is hard. As we can see, none of the above images match perfectly. The advantage of BFP is its easy implementation and speed; especially when the shapes of the two objects are close, it is a good representation of the data set features. However, there are two main disadvantages. One is that when the tumor has significant deformation, the BFP may not represent the data well. The second is that when two hearts are upside down relative to each other, the one-to-one correspondence between the two BFPs changes, and at the moment there is no way to check whether the correspondence is correct.
Figure 3: BFP Registration result.
Figure 4 shows the result of the ICP algorithm. ICP minimizes the total Euclidean distance between each point in the template and the points in the other time series. The result shows that each cluster of the tumor is reasonably close to the template. However, ICP takes more than 4 hours to run with 20 iterations.
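For comparison, a bare-bones point-to-point ICP loop is sketched below; this is a generic textbook version (using a SciPy k-d tree for the nearest-neighbor search), not the implementation timed in this paper. The per-iteration nearest-neighbor search over every point is what makes the method expensive on large meshes.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """SVD-based least-squares rotation/translation mapping src -> dst
    (row-wise correspondence), as in Arun et al. [11]."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=20):
    """Point-to-point ICP: repeatedly match every source point to its nearest
    target point, re-estimate the rigid transform, and apply it."""
    tree = cKDTree(target)              # nearest-neighbor structure on the template
    moved = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(moved)      # closest template point for each point
        R, t = best_fit_transform(moved, target[idx])
        moved = moved @ R.T + t
    return moved
```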
Figure 4: ICP registration result.
CONCLUSION
In this paper, we studied three methods for tumor registration. Comparing the three methods, ICP seems to produce the best results, but it takes too much computing time. The workstation we used has an Intel Core i7-5820K CPU and an NVIDIA GeForce GTX 980 graphics card. With this configuration, PA Registration and BFP Registration can be completed in a few minutes, but for the case we studied, ICP took 4 hours to complete.
Several research challenges will be addressed in the future. First, instead of relying on CAD software to calculate the second moment of inertia, we plan to calculate it internally with a new algorithm; it is worth comparing the results when all the points are taken into account. Second, we will compare the three approaches on a larger dataset from more patients. Third, we plan to test a new nonlinear registration method.
ACKNOWLEDGEMENTS
We would like to thank the OSF Jump Simulation Center, specifically Dr. Matthew Bramlet, Lela G. Dimonte, and Brent Cross, for generously providing us with all the image data used in this work.
REFERENCES
- Brock KK, Dawson LA, Sharpe MB, Moseley DJ, Jaffray DA. Feasibility of a novel deformable image registration technique to facilitate classification, targeting, and monitoring of tumor and normal tissue. Int J Radiat Oncol Biol Phys. 2006; 64: 1245-1254.
- Lamecker H, Pennec X. Atlas to image-with-tumor registration based on demons and deformation inpainting. In: MICCAI Workshop on Computational Imaging Biomarkers for Tumors - From Qualitative to Quantitative. 2010.
- Jenkinson M, Smith S. A global optimisation method for robust affine registration of brain images. Med Image Anal. 2001; 5: 143-156.
- Kaus MR, Warfield SK, Nabavi A, Black PM, Jolesz FA, Kikinis R. Automated segmentation of MR images of brain tumors. Radiology. 2001; 218: 586-591.
- Mohamed A, Zacharaki EI, Shen D, Davatzikos C. Deformable registration of brain tumor images via a statistical model of tumor-induced deformation. Med Image Anal. 2006; 10: 752-763.
- Thirion JP. Image matching as a diffusion process: an analogy with Maxwell's demons. Med Image Anal. 1998; 2: 243-260.
- Cuadra MB, Pollo C, Bardera A, Cuisenaire O, Villemure JG, Thiran JP. Atlas-based segmentation of pathological MR brain images using a model of lesion growth. IEEE Trans Med Imaging. 2004; 23: 1301-1314.
- Gendrin C, Furtado H, Weber C, Bloch C, Figl M, Pawiro SA, et al. Monitoring tumor motion by real time 2D/3D registration during radiotherapy. Radiother Oncol. 2012; 102: 274-280.
- Mang A, Schnabel JA, Crum WR, Modat M, Camara-Rey O, Palm C, et al. Consistency of parametric registration in serial MRI studies of brain tumor progression. Int J Comput Assist Radiol Surg. 2008; 3: 201-211.
- Brett M, Leff AP, Rorden C, Ashburner J. Spatial normalization of brain images with focal lesions using cost function masking. Neuroimage. 2001; 14: 486-500.
- Arun KS, Huang TS, Blostein SD. Least-squares fitting of two 3-D point sets. IEEE Trans Pattern Anal Mach Intell. 1987; 9: 698-700.
- Sadjadi FA, Hall EL. Three-dimensional moment invariants. IEEE Trans Pattern Anal Mach Intell. 1980; 2: 127-136.
- Reeves AP, Wittner BS. Shape analysis of three dimensional objects using the method of moments. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 1983.
- Galvez JM, Canton M. Normalization and shape recognition of three-dimensional objects by 3D moments. Pattern Recognition. 1993; 26: 667-681.
- Faber TL, Stokely EM. Orientation of 3-D structures in medical images. IEEE Trans Pattern Anal Mach Intell. 1988; 10: 626-633.