On a recent weekend trip to visit a friend, I had a wonderful opportunity to get my brain scanned by a Siemens MRI machine as part of an experiment at the Princeton Neuroscience Institute. While the science of brains is way beyond me, I was fascinated by the infinite possibilities we have to learn about brain function and memory from high-resolution neuroimaging through fMRI and other techniques. Needless to say, I probably annoyed some of the staff with my probing questions when they were just trying to get shit done so they could go home that Saturday.
I’m not totally oblivious to medical imaging. For fun in college, I took a graduate-level project course taught by Tony Reeves on computer analysis of biomedical images. Like my typical forays into “taking a course for fun” at Cornell, it turned into sleepless nights in the ECE department’s Linux cluster. It was pretty cool though. For our course project, my friend and I chose to research and develop a virtual colonography classifier that could detect cancer-causing polyps in the colon. We requested data from NCI and Walter Reed Army Medical Center and got CT scans for almost 1000 patients who had the rather unfortunate procedure of having their colons flooded with barium fluid through the rectum. We developed a system that, in a nutshell: segmented the colon out of the CT image set, generated a polygonized colon surface, computed surface index and curvedness for each tessellated surface unit, and then used the “special sauce” analysis we developed to produce and rank polyp candidates. Cutting this side story short, we were able to pretty accurately detect cancerous polyps in a person’s colon just with images – pretty awesome.
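As an aside for the curious: assuming the "surface index" above refers to Koenderink's shape index (the usual shape descriptor in the polyp-detection literature), the two per-vertex measures are only a few lines each. This is a sketch, not our actual project code, and the convention below (shape index scaled to [0, 1]) is just one common choice – which end of the range means "cup" vs. "cap" depends on your curvature sign convention:

```python
import math

def shape_index(k1, k2):
    # Koenderink-style shape index scaled to [0, 1]; k1 > k2 are the
    # principal curvatures at a surface point. Values near the ends of
    # the range are cup/cap-like, values near 0.5 are saddle-like.
    # (Which end is "cap" depends on the curvature sign convention.)
    if k1 == k2:
        return 0.5  # degenerate (umbilic) point; conventions vary here
    return 0.5 - (1.0 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

def curvedness(k1, k2):
    # How strongly bent the surface is, independent of its shape.
    return math.sqrt((k1 * k1 + k2 * k2) / 2.0)

# Example usage on made-up curvature values:
print(shape_index(2.0, 1.0), curvedness(2.0, 1.0))
print(shape_index(0.01, -0.01), curvedness(0.01, -0.01))
```

Rank every tessellated surface unit by these two numbers and the polyp candidates – small, strongly curved, cap-shaped bumps – start to fall out.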
Here are axial CT scans of a colon:
and a 3D reconstruction of a colon via polygonization of binary mask images:
Anyway, I was in the MRI machine for a while. It didn’t really feel weird at all – I just heard an occasional rotating sound coming from the machine. The images it gathered reminded me a lot of the colonography project I did in school, except that this images the brain instead of the gastrointestinal tract, and uses MRI instead of CT, so subjects don’t need to be flooded with barium fluid!
I took the scans home with me, so now it was time to check them out. I was given several files, the most important of which is the surface volume .nii file, which uses the NIH’s NIfTI-1 format. Here it is rendered using the AFNI software package, also from the NIH:
$ afni -niml
This volume contains arrays (one per dimension) of cross-sectional slice images of my head at 1mm resolution. That basically means a lot of images of the inside of my head, as though it had been sliced in 1mm increments – the magic of MRI means you don’t actually have to slice anything to see inside!
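To make the "stack of slices" idea concrete, here's a toy sketch in plain NumPy – the shape and data are made up, and a real scan would be loaded from the .nii file with a proper NIfTI reader rather than generated like this:

```python
import numpy as np

# Toy stand-in for a 1mm isotropic head volume. A real one would come
# from the .nii file; this shape is invented for illustration.
volume = np.random.rand(256, 256, 256)

# Each 2D cross-section is just a slice along one axis of the 3D array:
axial_slice    = volume[:, :, 128]  # top-down slice through the middle
coronal_slice  = volume[:, 128, :]  # front-to-back slice
sagittal_slice = volume[128, :, :]  # left-to-right slice

print(axial_slice.shape)  # each slice is a 256x256 image
```

Viewers like AFNI are essentially scrubbing an index along one of those axes as you drag the crosshair.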
So the volume has my whole head imaged, but how about seeing just my brain? After all, that’s what the NIH is pumping our tax dollars into understanding! Well, it turns out there’s a software package called FreeSurfer that is widely used in this field. It implements a lot of the same techniques I used in school to segment and polygonize the colon from the lower-body CT images, but does it for the brain instead (and probably does much, much more). Running FreeSurfer on the volume file to extract my brain from the rest of my head is a long, computationally intensive process – it took 18 hours on Princeton’s compute cluster! This process generates a lot of different segmented surfaces to help researchers visualize brain activity, but the two most important for me were my brain’s left hemisphere file lh.pial and right hemisphere file rh.pial. I was able to browse through all the FreeSurfer-generated surfaces using SUMA, the surface mapper program in the AFNI package:
$ suma -spec albert_both.spec -sv albert_SurfVol.nii.gz
I was really excited. So excited that I had the sudden urge to show this to everybody. You know, like close friends and family, and maybe more. This isn’t totally crazy, as I was told some participants were even excited enough to 3D-print their brains.
However, I didn’t want to go around telling people to download and install AFNI and run a shell command just to see my brain (think: could my grandma do this?). To begin with, AFNI was a pain in the ass to install on my MacBook – I had to install XQuartz once I found out X11 isn’t included on Mac OS X by default, and then suma insisted on loading libglib from a fink-installed location, which was annoying. So while I obliged and went the extra mile to set it all up, I wasn’t sure other people would.
Googling around, I was delighted to find a project called XTK that is able to read medical images and render them in a browser using WebGL. It’s surprisingly easy to work with.
To get an export of my whole brain, I used FreeSurfer to combine my left and right hemisphere files into a binary format that XTK can handle:
$ mris_convert --combinesurfs lh.pial rh.pial albert_both.pial
Pretty cool. The viewability of my brain will probably track the implementation of the WebGL specification in modern browsers. As of this writing, it works in the latest versions of Firefox and Chrome. You have to explicitly enable WebGL to see it in Safari, which is annoying, but I’m sure that will change in the future. It also seems to work in the latest mobile browsers (iPhone and iPad Safari included!).
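For reference, the XTK side of this is pleasantly small. A minimal page might look roughly like the sketch below – this isn’t my exact page, and the script URL and mesh-loading details should be checked against XTK’s own docs:

```html
<script type="text/javascript" src="http://get.goXTK.com/xtk_edge.js"></script>
<script type="text/javascript">
  window.onload = function() {
    var r = new X.renderer3D();      // WebGL renderer attached to the page
    r.init();
    var brain = new X.mesh();
    brain.file = 'albert_both.pial'; // the combined surface from mris_convert
    r.add(brain);
    r.render();                      // fetch, parse, and start the render loop
  };
</script>
```

That’s the whole viewer: no native installs, no X11, no fink – just a URL grandma can click.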