1 Introduction

In clinical practice, brain MRI is a powerful imaging tool for diagnosing brain diseases. Unlike traditional measures such as history taking and physical examination, an MRI scan shows a lesion's location, size, nature, and response to treatment accurately and quickly. Such diagnostic capability, combined with knowledge of the anatomy and functional role of the affected structure, may greatly improve the accuracy and detail of the diagnosis.

The brain can be divided into two components – neural tissue (nerve cell bodies and axons) and supporting tissue (vessels, stroma, matrix, and other structures) – and the neural component can be simplified as streams (pathways/tracts) of neuron bundles. MRI scanning techniques have recently improved to the point of capturing these streams of bundles, and diffusion tensor imaging (DTI) is representative among them. DTI-based tractography is already used in clinical research on conditions such as epilepsy, and its inter-observer and intra-observer reliability has been verified as acceptable.

Space-occupying lesions inside the brain, or destructive brain lesions such as infections, tend to deform brain structures. In these cases, direct application of normal anatomical knowledge to conventional brain MRI may be insufficient to locate the exact anatomic structures, and even less able to predict the physiological deficits caused by the lesions. However, if tractography can be obtained via DTI and the lesions can be located on specific tracts, a more accurate diagnosis may be achieved by tracing the affected tract or bundle, regardless of anatomical deformation.

Nerve bundles in the brain do not simply link one part of the brain to one part of the body through a single tract. The bundles exchange information with surrounding tracts, nuclei, and gray matter (crossing or kissing fibers). These compound interconnections make the interpretation of tractography rendered on a 2D monitor difficult and complex. The purpose of this study was to reconstruct brain tractography in a 3D virtual reality (VR) space using Unity3D and the Oculus Rift, so that it can be handled easily and traced up and down. Such efforts may contribute to an exact and convenient understanding of the complex relations among these neural bundles.

2 Material

2.1 Study Design

We previously reported functional research on white matter tracts based on observation of 114 patients with subcortical vascular cognitive impairment (SVCI) [1]. From that study, we obtained the following three findings. Fractional anisotropy (FA) values in the middle portion of the cingulum (CG) were associated with scores in language, visuospatial, memory, and frontal functions. FA values in the anterior portion of the anterior thalamic radiation (ATR) were associated with scores in attention, memory, and frontal executive functions, while FA values in its middle portion were associated with the language function score. In the superior longitudinal fasciculus (SLF), FA values in the posterior portion were associated with visuospatial dysfunction, while FA values in the middle portion were associated with memory impairment.

That study showed that disconnection of specific white matter tracts – especially those neighboring and connecting gray matter regions important to certain cognitive functions – may contribute to specific cognitive impairments in patients with SVCI.

In the present study, we obtained the T1 image and DTI tractography of a selected SVCI patient from the previous research, and reconstructed these images into 3D objects that can be observed and manipulated using a virtual reality (VR) device.

2.2 Image Acquisition

T1 and diffusion-weighted images (DWI) were acquired from a subject at Samsung Medical Center using the same 3.0 T MRI scanner (Philips 3.0T Achieva). T1-weighted MRI data were recorded with the following imaging parameters: 1 mm sagittal slice thickness with over-contiguous slices (50 % overlap, no gap); repetition time (TR) of 9.9 ms; echo time (TE) of 4.6 ms; flip angle of 8°; and matrix size of 240 × 240 pixels, reconstructed to 480 × 480 over a 240 mm field of view. In the whole-brain diffusion-weighted MRI examination, sets of axial diffusion-weighted single-shot echo-planar images were collected with the following parameters: 128 × 128 acquisition matrix; 1.72 × 1.72 × 2 mm³ voxels, reconstructed to 1.72 × 1.72 × 2 mm³; 70 axial slices; 220 × 220 mm² field of view; TE 60 ms; TR 7383 ms; flip angle 90°; slice gap 0 mm; b-factor of 600 s/mm². In addition to a baseline image without diffusion weighting (the reference volume), diffusion-weighted images were acquired along 45 different directions. All axial sections were acquired parallel to the anterior commissure-posterior commissure line.

3 Method

3.1 Image Preprocessing

DTI images and structural MR images of each subject were acquired for this study. Processing took the following steps. First, DTI images were corrected for eddy current distortions using the FSL program. Second, HARDI-based deterministic tracking was executed, generating a track file. Third, non-brain tissue was removed from the whole-head DTI and structural MR (T1) images using BET (Brain Extraction Tool). Fourth, the images were spatially registered to a standard brain image: the T1 images were registered to a standard template by FNIRT (FMRIB's Nonlinear Image Registration Tool), and the DTI images were registered to the T1 images by FLIRT (FMRIB's Linear Image Registration Tool). The registered images were then inversely transformed back, yielding spatially normalized images.
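The preprocessing steps above can be sketched as a sequence of FSL command lines. This is a minimal sketch, assuming typical defaults; the file names (`dti.nii.gz`, `t1.nii.gz`, and so on) are hypothetical placeholders, and the exact options depend on the FSL version used.

```python
# Sketch of the FSL preprocessing pipeline described above.
# File names (dti.nii.gz, t1.nii.gz, ...) are hypothetical placeholders.

def fsl_pipeline(dti="dti.nii.gz", t1="t1.nii.gz",
                 template="MNI152_T1_1mm.nii.gz"):
    """Return the FSL command lines, in order, as strings."""
    return [
        # 1. Eddy current correction (reference volume index 0).
        f"eddy_correct {dti} dti_ecc.nii.gz 0",
        # 3. Skull stripping of the DTI and T1 images with BET.
        "bet dti_ecc.nii.gz dti_brain.nii.gz -f 0.3",
        f"bet {t1} t1_brain.nii.gz -f 0.3",
        # 4a. Linear registration of DTI to T1 (FLIRT) ...
        "flirt -in dti_brain.nii.gz -ref t1_brain.nii.gz -omat dti2t1.mat",
        # 4b. ... and nonlinear registration of T1 to the template (FNIRT).
        f"fnirt --in=t1_brain.nii.gz --ref={template} --cout=t12std_warp",
    ]

for cmd in fsl_pipeline():
    print(cmd)
```

Step 2 (HARDI-based deterministic tracking) is performed outside FSL and is therefore not shown here.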

We obtained streamlines from whole-brain tractography. The streamlines were grouped into the following seven major fiber tracts, based on their shapes and positions [2]: anterior thalamic radiation (ATR), cingulum (CG), corticospinal tract (CST), inferior fronto-occipital fasciculus (IFO), inferior longitudinal fasciculus (ILF), superior longitudinal fasciculus (SLF), and uncinate fasciculus (UNC).

Then, we computed cortical surface meshes using FreeSurfer v 5.1.0 (http://surfer.nmr.mgh.harvard.edu). FreeSurfer uses the following algorithm: outer and inner cortical surface meshes are first constructed from the T1-weighted MR data. The inner surface represents the boundary between white matter and cortical gray matter, and the outer surface is defined as the exterior of the cortical gray matter. As the outer surface is constructed by deforming the inner surface, the two surface meshes are isomorphic, with the same number of vertices and the same edge connectivity.
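The isomorphism of the two meshes follows directly from how the outer surface is built: each inner-surface vertex is moved outward while the edge list is left untouched. A toy sketch (the vertex, normal, and thickness values are invented for illustration):

```python
# Toy sketch of why the inner and outer cortical meshes are isomorphic:
# the outer surface is produced by displacing each inner-surface vertex
# outward along its normal, so vertex count and edges are unchanged.

inner_vertices = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)]
edges = [(0, 1), (1, 2), (2, 0)]        # shared by both meshes
normals = [(0.0, 0.0, 1.0)] * 3         # outward normals (toy: all +z)
thickness = 0.25                        # local cortical thickness (toy)

outer_vertices = [
    (x + thickness * nx, y + thickness * ny, z + thickness * nz)
    for (x, y, z), (nx, ny, nz) in zip(inner_vertices, normals)
]

# Same vertex count, same edge connectivity: the meshes are isomorphic.
assert len(outer_vertices) == len(inner_vertices)
print(outer_vertices[0])  # → (0.0, 0.0, 1.25)
```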

Through FreeSurfer, we parcellated the cerebral cortex into 68 cortical ROIs based on the Desikan-Killiany atlas [3] and obtained the cortical thickness of each ROI for each subject. We reconstructed T1-weighted MR images and ROI atlas volumes from each patient's FreeSurfer data for use in coregistration with the DWI images.

3.2 Platform

Unity Game Engine.

Unity3D (© Unity Technologies, San Francisco, California, USA; ver. 5.3.2) is a tool mainly used for producing games in three-dimensional environments. It is convenient for loading and handling 3D objects created both inside and outside the program. We loaded the T1 data and the DTI data into Unity using somewhat different methods.

Setting up a development environment in Unity3D involves several steps. First, create a New Project; the Scene, Game, and Console windows will then appear. Most of the project work takes place in the 'Scene' tab. The default scene contains one default camera and a directional light, which can be checked in the Hierarchy tab. Template 3D shapes (cube, sphere, cylinder, etc.) can be created from the Hierarchy tab, and, if necessary, external 3D objects can be loaded by simply dragging them into the Project-Assets tab and then placing them in the Scene or Hierarchy tab.

The T1 images we obtained exist as 'obj' files. Since 'fbx' files are more suitable for processes such as motion tracking or animated movements [4], we recommend converting 'obj' files into 'fbx' files with a program such as Autodesk FBX Converter (© 2016 Autodesk, FBX® 2013.3 Converter) before loading the data into the Unity scene. Such programs are freely available from 'autodesk.com'.

DTI data, unlike T1 data, exist as a text file containing the three-dimensional coordinate values of each fiber. To load these data into the Unity3D scene, we coded a script that reads the number of bundles, the name of each bundle, the number of fibers in each bundle, the number of coordinates constituting each fiber, and the coordinate values themselves. Every two consecutive coordinates of a fiber are then connected with a thin cylinder to render the streamlines; this was done by a script that instantiates a cylinder prefab (sample) between two selected vectors (3D coordinates). The number of fibers, or some of the coordinates, may be skipped depending on the computer's specifications (in the given figure, half of the fibers and one-fifth of the coordinate values were used).
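The reading-and-downsampling step can be sketched as follows. This is a minimal sketch: the text layout assumed here (bundle name, fiber count, then per-fiber point count and "x y z" lines) is a hypothetical reconstruction from the description above, and the actual exported format may differ; in Unity the equivalent logic would run in C# before instantiating the cylinders.

```python
# Sketch of reading bundle/fiber coordinates and downsampling them.
# Hypothetical text layout:
#   <bundle name>
#   <number of fibers>
#   then, per fiber: <number of points> followed by "x y z" lines.

def parse_bundles(lines):
    """Return {bundle_name: [fiber, ...]}, each fiber a list of (x, y, z)."""
    it = iter(lines)
    bundles = {}
    for name in it:
        n_fibers = int(next(it))
        fibers = []
        for _ in range(n_fibers):
            n_points = int(next(it))
            fibers.append([tuple(map(float, next(it).split()))
                           for _ in range(n_points)])
        bundles[name.strip()] = fibers
    return bundles

def downsample(fibers, fiber_step=2, point_step=5):
    """Keep every fiber_step-th fiber and every point_step-th point
    (the figures used half of the fibers and one-fifth of the points)."""
    return [f[::point_step] for f in fibers[::fiber_step]]

text = ["CST", "2",
        "3", "0 0 0", "1 0 0", "2 0 0",
        "2", "0 1 0", "0 2 0"]
bundles = parse_bundles(text)
print(len(downsample(bundles["CST"])))  # → 1
```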

To differentiate one bundle from another, we colored each bundle red, blue, green, yellow, magenta, cyan, or black. This was done while instantiating the cylinders, by reading the name of the bundle to which each coordinate belongs.
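The name-to-color lookup can be sketched as below. The `color_for` helper is hypothetical; in Unity, the looked-up color would be applied to the cylinder's material when the prefab is instantiated. The mapping matches the one listed in Sect. 4.

```python
# Bundle-to-color mapping as listed in the Results section.
BUNDLE_COLORS = {
    "ATR": "red",     "CST": "blue",  "CG": "green",  "IFO": "yellow",
    "ILF": "magenta", "UNC": "cyan",  "SLF": "black",
}

def color_for(bundle_name, default="white"):
    """Pick a cylinder color from the bundle name read from the data file.
    `default` covers unrecognized bundle names (hypothetical fallback)."""
    return BUNDLE_COLORS.get(bundle_name, default)

print(color_for("SLF"))  # → black
```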

3.3 Device

Oculus Rift Development Kit 2.

The Oculus Rift is a virtual reality (VR) device (© 2016 Oculus VR, LLC, Menlo Park, California, USA; Oculus Rift Development Kit 2). VR is a technology that simulates a real environment so that it can be experienced in three dimensions (3D). The device displays two adjacent images, one for the left eye and one for the right. The alignment of the two lenses enables zooming and re-shaping of the pictures for both eyes, which produces a stereoscopic 3D image [5].

Linking the Unity scene with the Oculus device is done simply by downloading 'Oculus Utilities for Unity 5 V0.1.3.0-beta' from 'developer.oculus.com', loading the utilities into the Unity3D project, and exchanging the default camera for the camera rig provided for the Oculus Rift. The following figure compares the in-game view (the view seen when the program runs) of the default camera and the Oculus camera (Fig. 1).

Fig. 1.

View Comparison between Unity3D Default Camera (left) and Oculus-Provided Camera (right)

The screenshots were taken from the two cameras at the same position in the Unity3D scene. Since the Oculus Rift must send one image to each of the two eyes, the screen itself is divided into two parts, which together produce the stereoscopic 3D image (Fig. 2).

Fig. 2.

Scene view of DTI Tractography

4 Results

The colors corresponding to the bundles are red (ATR), blue (CST), green (CG), yellow (IFO), magenta (ILF), cyan (UNC), and black (SLF).

Abbreviations:

ATR, anterior thalamic radiation; CST, corticospinal tract; CG, cingulum; IFO, inferior fronto-occipital fasciculus; ILF, inferior longitudinal fasciculus; UNC, uncinate fasciculus; SLF, superior longitudinal fasciculus.

The brighter part of the brain represents white matter, and the darker part represents gray matter. The observer may select the information he or she wants to see by clicking the toggle buttons at the bottom left. The object rotates horizontally or vertically according to direction-key input. The screenshot was taken after such operations in Unity3D.

Details are the same as in Fig. 3. The screenshot was taken at the same angle as Fig. 3, with the toggle button 'Show Streamlines' selected. The screens in Figs. 3 and 4 both appear stereoscopic and three-dimensional when the observer wears the Oculus Rift.

Fig. 3.

Oculus view of T1 image

Fig. 4.

Oculus view of DTI Tractography

5 Conclusions

From this study, we obtained a virtual image that was sufficiently manipulable in a three-dimensional environment. However, several things appear necessary – objectivity of the sample, application to other samples, and comparison between normal and patient brains. We expect to address these points, along with the following limitations of this study, in future work (Fig. 5).

Fig. 5.

Free-handled images of DTI Tractography

5.1 Limitations of this Study

Fidelity of the DTI Data.

Since we reduced the number of fibers and coordinates, we cannot claim that the image is an exact rendering of the streamlines obtained from the DTI data. This problem can be mitigated by upgrading the computer's specifications.

Controllability of the Image.

Options such as zooming in and out with the mouse wheel, and rotating images by dragging instead of with direction keys, could make handling easier and more comfortable. If the motion tracking function of the Oculus device could be used, the observer might have a more vivid VR experience. We may address these problems by further study of input-output handling and the user interface (UI).

Non-Automatic Functions.

While the T1 image is automatically placed at the center of the screen when loaded, the coordinates of the DTI data are fixed somewhat differently at acquisition time. Since the coordinate systems of the two data sets differ, the position of one data set must currently be adjusted manually to fit the other. If the coordinate system in which the DTI data are defined were known, the DTI data could be loaded at the same position as the T1 data by automatic adjustment.
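Such an automatic adjustment amounts to applying a DTI-to-T1 affine transform to every streamline coordinate before instantiating the cylinders. A minimal sketch, assuming a 4×4 affine matrix is available (the matrix values below are hypothetical; in practice it would come from the DTI-to-T1 registration, e.g. a FLIRT transformation matrix):

```python
# Sketch of mapping DTI streamline coordinates into the T1 frame with a
# 4x4 affine matrix. The matrix values below are hypothetical examples.

def apply_affine(matrix, point):
    """Apply a 4x4 affine (row-major nested lists) to an (x, y, z) point."""
    x, y, z = point
    hom = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(sum(m * v for m, v in zip(row, hom)) for row in matrix[:3])

# Hypothetical affine: identity rotation plus a translation that moves
# the DTI origin onto the T1 origin.
dti2t1 = [[1.0, 0.0, 0.0, -12.0],
          [0.0, 1.0, 0.0,   5.0],
          [0.0, 0.0, 1.0,  -3.0],
          [0.0, 0.0, 0.0,   1.0]]

print(apply_affine(dti2t1, (12.0, 0.0, 3.0)))  # → (0.0, 5.0, 0.0)
```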