This week I’m going to talk about a little project I’ve been working on using the Raspberry Pi. For those unfortunate souls who haven’t heard of the Raspberry Pi, it’s a stripped-down, ARM-powered Linux box that costs about £25. Well, “box” is actually the wrong word seeing as it comes as a naked circuit board, but it’s powerful enough to run a complete Linux system. The default system is a version of Debian Wheezy, called Raspbian.

These little things are pretty impressive — half a gig of RAM, ARM processor, GPU. Connectivity is also very good: two USB 2.0 ports, ethernet, HDMI and a load of tempting-looking little pins that someone more experienced with a soldering iron than I am could undoubtedly do wonders with. Add an SD card to act as a hard drive and you’re away.

I unwrapped one on Christmas morning, and the obvious question was “How can I really put this thing through its paces?”. As I mentioned in a post elsewhere, the natural thing for me to try was to port over Camino, the diffusion MRI toolkit I work on, and try running some analyses. The results were pretty impressive, so this week I’m going to post instructions for what I did. This is a companion piece to my post on the Raspberry Pi website.

I’d estimate that going from nothing to the first set of images will take you about 40 minutes to an hour, depending on your confidence with the platform. The final image will take the Raspi about two hours to produce, but it’ll just work away by itself, so you can go off and not worry about it. Also, all the commands in this post run from a terminal window, and to get the images actually on screen you should have a shell terminal open on the Raspbian desktop.

Setting up

My set-up is pretty vanilla. I’ve installed Raspbian Wheezy using the SD-card image from the Raspberry Pi website. That’s the standard hard-float version, not the “soft float” version (this is important for numerical efficiency).

I burned this to a high-speed, 16 GB SD card – a Samsung Class 10 MB-SPABAEU. I went for a high-speed card because there’s a lot of disk access in what I’m doing and I thought this might help. Other cards should work just as well.

I allowed the image to automatically resize the partition on the SD card, and left the memory split between CPU and GPU at the default value.

Java

The next ingredient is the Java Runtime Environment (JRE). For the uninitiated, this is the box of tricks that allows the Raspi to run Java. Crucially, don’t use the one from RaspberryPi.org or the default Raspberry Pi download from Oracle. These do floating-point arithmetic in software and require the soft-float Raspbian OS, which will slow things down too much to make any of this practical.

The right JRE to use is the Java 8 developer preview for ARM. This implements hardware floating point and has the added advantage of being bang up-to-date (as of Feb 2013!). Accept the license agreement (assuming you don’t disagree with it, naturally) and download the zip archive. You can do this from the Raspbian desktop using Midori.

Once you’ve downloaded the zip archive, there are installation instructions here. Don’t worry — it isn’t difficult to do and only takes a couple of minutes. Make sure you change the path variables as they suggest; this makes things much easier further down the line.
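Once it’s installed and on your path, it’s worth a quick sanity check from a terminal window (the exact version string will differ from machine to machine, but you should see something reporting a Java 8 / 1.8.0 build):

java -version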

Camino

Now we’re getting to the more interesting bits. In order to run diffusion image analysis, we need some software that knows how to do it. As a totally unbiased observer, I recommend the software written by my colleagues and me: Camino. Camino is open source, and available for free. It lives here, and the download section is here.

The two download options only differ in the compression – tar.gz or tar.bz2. To unpack them you’ll need gzip or bzip2 respectively, which you can install on your Raspi with

sudo apt-get install gzip

or

sudo apt-get install bzip2

I recommend making a folder to put Camino in

mkdir camino

and moving the camino archive in there. From here on I’ll refer to this folder as the Camino root directory.

Installation instructions for Camino are here. You can safely ignore the first step about Java heap size – it’s not important here. Follow the Linux/Unix instructions in section 2.
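If you just want the short version, unpacking from a terminal looks something like this – I’m assuming the tar.gz download here and that it’s sitting in a camino folder in your home directory, so adjust the path and filename to match whatever you actually downloaded:

cd ~/camino
tar xzf camino*.tar.gz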
Unpacking the archive should be fairly quick; building the code (the step where you type

make

) will take a couple of minutes, so don’t worry if it doesn’t happen immediately. You’ll get a sequence of messages saying that different commands are being built (about 30 of them). Once it’s finished, you’ll get your command prompt back.

Again, do make the recommended changes to your $PATH variable – without this, things can get annoying.
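As an example, assuming you put Camino in a camino folder in your home directory, adding a line like this to the end of ~/.bashrc and then opening a fresh terminal will do it:

export PATH=$HOME/camino/bin:$PATH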

One small word of warning: Camino is research software and so can be tricky to use. It doesn’t have a point-and-click graphical interface, but instead works with commands that you type into a shell window. I’ll guide you through them.

It also goes without saying that all of this is for demonstration purposes only.

The good bit

If you’ve got this far, you’re ready to go! To generate some images you’ll need two things: some data, and the right commands. Both of these can be found in the tutorials section of the Camino website. Specifically, the DTI tutorial.

The DTI tutorial is a detailed tutorial aimed at researchers, and running through the whole thing is not for the faint-hearted! Also, some of the techniques are a bit much for the Raspi, so instead I’ll post a sequence of commands here for you to try.

First, download the example data in section 2, move the archive to the Camino root directory, and unpack it.
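Unpacking it is the same idea as before – something like the following, with the filename swapped for whatever the tutorial’s archive is actually called, and assuming the camino folder is in your home directory (if unzip isn’t on your system, sudo apt-get install unzip will sort that out):

cd ~/camino
unzip example_data.zip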

Now follow the instructions in section 3 about making a schemefile. This is a file that tells Camino about the scan sequence used to acquire the data – it’s a necessary step, but you don’t have to understand what’s in it. Camino understands it!
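The tutorial has the exact command, but for reference it boils down to a one-liner using Camino’s fsl2scheme tool, along these lines – the b-vector and b-value filenames below are placeholders, so use the ones supplied with the example data (the output name just has to match what you pass to -schemefile later):

fsl2scheme -bvecfile bvecs.txt -bvalfile bvals.txt > 4Ddwi_b1000_bvector.scheme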

Next, run the data conversion step:

image2voxel -4dimage 4Ddwi_b1000.nii.gz -outputfile dwi.Bfloat

And (finally!) we can run the analysis. First, we fit a diffusion tensor:

modelfit -inputfile dwi.Bfloat -schemefile 4Ddwi_b1000_bvector.scheme
 -model ldt -bgmask brain_mask.nii.gz -outputfile dt.Bdouble

The Raspi will sit there for a couple of minutes after this – it’s a big calculation for a small machine. Once your command prompt reappears, you can use what you’ve just made to make an FA map (that’s a measure of directedness):

cat dt.Bdouble | fa | voxel2image -outputroot fa -header 4Ddwi_b1000.nii.gz

and also get the tissue directions:

cat dt.Bdouble | dteig > dteig.Bdouble

The cat command just sends the data file to the next command (it’s short for conCATenate). No felines involved!

And now we’ve got enough to make our first image! Camino’s image viewer is called pdview, and we can use it to display the images we’ve just made.

pdview -inputfile dteig.Bdouble -scalarfile fa.nii.gz

This will run a little slowly, but it’ll get there! With a little patience and understanding you should see a colour FA map with directions. You should be able to use pdview to move around in the data (it’s 3D; changing the slice number will move up and down through the brain), but you’ll have to be a little patient. You can switch the angle by clicking the axial/sagittal/coronal buttons at the top. You can also switch the direction lines on and off by toggling the “show vectors” box in the top left corner.

I used this program to make most of the images in the blog post, grabbing desktop images using

scrot

(stifling a giggle here? Shame on you! If you need to know more about this command, try “man scrot”…)
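If scrot isn’t already on your system, it installs in the usual way:

sudo apt-get install scrot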

More advanced imaging

So, the next bit I tried was Q-ball imaging. This is a more advanced technique and requires more steps, but the Raspi is well up to the challenge, so it’s definitely worth a go.

First we do a bit of pre-processing to tell Camino how complex each voxel is:

voxelclassify -inputfile dwi.Bfloat -bgthresh 200 
-schemefile 4Ddwi_b1000_bvector.scheme -order 4 > dwi_VC.Bdouble

and

voxelclassify -inputfile dwi.Bfloat -bgthresh 200 
-schemefile 4Ddwi_b1000_bvector.scheme -order 4 
-ftest 1.0E-09 1.0E-01 1.0E-03 > dwi_VC.Bint

Now we generate a Q-Ball analysis matrix:

qballmx -schemefile 4Ddwi_b1000_bvector.scheme > qballMatrix.Bdouble

and then run the Q-Ball analysis. This will work away for about 15 minutes, so it might be time for a cup of tea.

linrecon dwi.Bfloat 4Ddwi_b1000_bvector.scheme qballMatrix.Bdouble
 -normalize -bgmask brain_mask.nii.gz > dwi_ODFs.Bdouble

Now we’re ready for the final step: creating the Q-Ball image. First, we need an FA map in a slightly different format than we currently have. We can generate that using

 fa < dt.Bdouble > fa.img 

Now we need to split the data into individual slices, which we do with the unix split command. The byte counts below work out to one slice of each dataset (112×112 voxels per slice, times the number of values stored per voxel, times 8 bytes per double). The commands write into a splitBrain folder, so create that first if it doesn’t already exist:
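
mkdir splitBrain

Then split both datasets: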

split -b $((112*112*(246+2)*8)) dwi_ODFs.Bdouble splitBrain/dwi_ODFs_slice
split -b $((112*112*8)) fa.img splitBrain/fa_slice

The final step is to use another of Camino’s image generators to build the image. This is the really lengthy step – it will take over two hours, but trust me, it’ll work!

sfplot -inputmodel rbf -rbfpointset 246 -rbfsigma 0.2618 
-xsize 112 -ysize 112 -minifigsize 20 20 -minifigseparation 2 2 
-minmaxnorm -dircolcode -projection 1 -2 
-backdrop splitBrain/fa_slicear < splitBrain/dwi_ODFs_slicear 
> dwi_ODFs_slicear.rgb

When this finally finishes, you can view it using a program like imagemagick. This isn’t installed by default but it’s a great program and I highly recommend using it. You can install it in the usual way

sudo apt-get install imagemagick

(Notice there’s a ‘k’ on the end of ‘imagemagick’ – apt-get will fail if you mis-spell it. No pun intended…)

To display the image you’ve just made, you’ll need to tell imagemagick what size the image is. This should have been printed out by the previous command, and you can cut and paste it as-is into your display command. If that size is, say, 2586×2586, then the command to display the image is:

display -size 2586x2586 dwi_ODFs_slicear.rgb
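If you’d rather keep a normal image file, ImageMagick’s convert should turn the raw rgb data into a PNG in one go – something like this, again with the size swapped for whatever sfplot reported (the rgb: prefix just forces ImageMagick to treat the input as raw RGB samples):

convert -size 2586x2586 -depth 8 rgb:dwi_ODFs_slicear.rgb dwi_ODFs_slicear.png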

And there you have it! Complete instructions to reproduce what I did in my Raspberry Pi blog post. I hope you enjoyed it – feel free to get in touch if you want to know more, or let me know how you got on.

See you again!
