You are currently browsing the monthly archive for October 2012.

I walk quite a lot. I’m lucky enough to live close enough to UCL to be able to walk to work (which is quite a luxury in London), and so each morning and evening I trek the two-and-a-half miles or so between Bloomsbury and Islington. Fitness-wise, it keeps the wolf from the door, so to speak, and it saves the cost of a travelcard each month, which more than offsets the increased wear-and-tear on shoe leather.

But it doesn’t stop there, oh no. This weekend I’m off to the Lake District in Cumbria for some proper walking. Walking that doesn’t involve anything as effete as a pavement. Walking that involves maps, Proper Boots and the sort of clothing that markets itself as Hi-Tech. In other words, I’m going hiking. There will be sandwiches, fresh air, and Ordnance Survey maps. Should be a lot of fun.

Hiking is something that’s in the family. My Grandfather, George, started going to the Lakes back in the 1930s and kept on going for the next sixty years. My dad and his brother and sister were all raised on hiking and climbing, and they in turn dragged my generation along as well. Early on, Granddad joined that most venerable of Lake District institutions, the Fell & Rock Climbing Club, and later started a club of his own: The Tuesday Climbing Club.

Since he died, the Tuesday Climbing Club have held a weekend of walking in George’s honour, and this weekend is the George Hall memorial meet. This year there will be a small gaggle of Halls in attendance. If previous experience is anything to go by, we will attempt a day’s walking that really shouldn’t be tackled in less than three. This will involve at least four summits, at least one of which will be quite hard to identify until you’ve left it. It will be cold and (possibly) dry, but the air will be fresh and the views will be spectacular.

That, plus a sturdy pair of ear-plugs to keep out the snoring when sharing a dorm with my aforementioned relatives (at least the male ones) and it should be a fine weekend. Buttermere here we come.


As promised, here are some code snippets for the visualiser. First up, assemble the vertex and normals buffers.

Constructing the geometry

In my case, I’ve already read the data into Triangle objects which hold their vertices and normals. I rearrange things into FloatBuffer objects:

Collection<Triangle> triangles= Seer.triangles;
scale= (float)(1.0/Seer.maxSubsSize);

vertexCount= triangles.size()*3; // three vertices per triangle
vertices= BufferUtil.newFloatBuffer(vertexCount*3); // three coords per vertex
normals= BufferUtil.newFloatBuffer(vertexCount*3); // one normal per vertex

// add all triangles in the mesh to the vertex float buffer
for(Iterator<Triangle> triIt= triangles.iterator(); triIt.hasNext(); ){

    Triangle triangle= triIt.next();
    double[] normal= triangle.getNormal();

    for(int i=0; i<3; i++){ // loop over vertices in the triangle
        double[] vertex= triangle.getVertex(i);

        for(int j=0; j<3; j++){ // loop over coords of the vertex
            vertices.put(substrateToGL(vertex[j], j)); // transform substrate space into openGL coords
            normals.put(-(float)normal[j]); // normals are (ahem) normalised and polar, so no transform
        }
    }
}

// ready both buffers for reading
vertices.flip();
normals.flip();

I loop over the Collection of Triangles and repack everything into FloatBuffers for vertices and normals, which are then flipped to ready them for reading.

There are a couple of details to note: first, normals are repeated three times (once for each vertex), and second, that I’ve negated the normals (the minus sign in the inner loop). This turns out to be important for correct rendering in my case, because the code that generates the meshes in the first place always makes them point inward.
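A side note on flip(), since “flipping” is doing double duty here: Buffer.flip() doesn’t reverse the contents, it just sets the limit to the current position and rewinds the position to zero, so the buffer can be read back from the start. A minimal stdlib sketch (plain java.nio, no JoGL needed):

```java
import java.nio.FloatBuffer;

public class FlipDemo {
    public static void main(String[] args) {
        FloatBuffer buf = FloatBuffer.allocate(3);
        buf.put(1.0f).put(2.0f).put(3.0f);

        // After writing: position is 3, limit is 3, so nothing remains to read.
        buf.flip();

        // flip() sets limit = old position and position = 0,
        // so the data reads back in the order it was written.
        System.out.println(buf.get()); // 1.0
        System.out.println(buf.get()); // 2.0
        System.out.println(buf.get()); // 3.0
    }
}
```

Forgetting the flip() is a classic way to hand glBufferData a buffer with zero remaining elements.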

Build the VBOs

Next up we need to generate and bind the buffers for the VBO. We need separate buffers for vertices and normals (we’d need separate ones for colour and texture data if we were doing that as well).

// Generate and bind the vertex buffer
gl.glGenBuffersARB(1, VBOVertices, 0); // Get A Valid Name
gl.glBindBufferARB(GL.GL_ARRAY_BUFFER_ARB, VBOVertices[0]); // Bind The Buffer
// Load The Data
gl.glBufferDataARB(GL.GL_ARRAY_BUFFER_ARB, vertexCount * 3 * BufferUtil.SIZEOF_FLOAT, vertices, GL.GL_STATIC_DRAW_ARB);

// Generate and bind the normals buffer
gl.glGenBuffersARB(1, VBONormals, 0);
gl.glBindBufferARB(GL.GL_ARRAY_BUFFER_ARB, VBONormals[0]);
// Load the normals data
gl.glBufferDataARB(GL.GL_ARRAY_BUFFER_ARB, vertexCount * 3 * BufferUtil.SIZEOF_FLOAT, normals, GL.GL_STATIC_DRAW_ARB);

vertices = null;
normals = null;

So here we’ve generated buffer “names” (which are just integer identifiers), bound each buffer to its name, and loaded the data into the bound buffer. After that we don’t need the original FloatBuffers any more and can free the memory.

Drawing the object

Now we’re all set, and just need to be able to render the mesh whenever we feel like it. I’ve added a method to my mesh object that renders it which looks like this.

// Enable Pointers
gl.glEnableClientState(GL.GL_NORMAL_ARRAY); // Enable Normal Arrays
gl.glEnableClientState(GL.GL_VERTEX_ARRAY); // Enable Vertex Arrays

// Set The Normal Pointer To The Normals Buffer
gl.glBindBufferARB(GL.GL_ARRAY_BUFFER_ARB, this.VBONormals[0]);
gl.glNormalPointer(GL.GL_FLOAT, 0, 0);

// Set The Vertex Pointer To The Vertex Buffer
gl.glBindBufferARB(GL.GL_ARRAY_BUFFER_ARB, this.VBOVertices[0]);
gl.glVertexPointer(3, GL.GL_FLOAT, 0, 0);

gl.glDrawArrays(GL.GL_TRIANGLES, 0, this.vertexCount); // Draw All Of The Triangles At Once

// Disable Pointers 
gl.glDisableClientState(GL.GL_VERTEX_ARRAY);  // Disable Vertex Arrays 
gl.glDisableClientState(GL.GL_NORMAL_ARRAY); // Disable Normal Arrays

What turns out to be important here is that you do the vertices LAST: activate the client state for the normals before the vertices, and specify the normals pointer before the vertex pointer. Then call gl.glDrawArrays to actually instruct the card to render. I also disable the client states vertices-first (the opposite order to enabling), which may not be essential, but does work.

And we’re done…

That about wraps it up. I’ve been able to render meshes with about a million triangles at upwards of 60fps and 5 million at around 30fps. The complete application links to the Camino simulation and renders diffusive dynamics restricted by the mesh. The simulation ends up running in a separate thread so that it doesn’t pull down the frame rate in the visualiser, and also renders a small displacement plot in the lower left corner that co-rotates with the main plot. It’s also got arcball rotation and mouse-wheel zoom.
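The thread decoupling is straightforward in principle: the simulation publishes its latest state, and the render loop just draws whatever is newest, so a slow dynamics step can never stall a frame. A toy sketch of the pattern (all names here are mine, not Camino’s):

```java
import java.util.concurrent.atomic.AtomicReference;

public class SimThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // Shared slot holding the most recently published spin positions.
        AtomicReference<double[]> latestPositions =
            new AtomicReference<>(new double[]{0.0});

        // Simulation thread: compute, then publish atomically.
        Thread sim = new Thread(() -> {
            double[] positions = {1.0, 2.0, 3.0};
            // ... expensive dynamics update would go here ...
            latestPositions.set(positions); // publish for the render thread
        });
        sim.start();
        sim.join();

        // Render thread: grab whatever is newest and draw it.
        double[] toDraw = latestPositions.get();
        System.out.println(toDraw.length);
    }
}
```

The render thread never blocks on the simulation; it only ever reads the last published snapshot.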

Right, that’ll do I think. All comments gratefully received 🙂

After a couple of more introspective blogs, this week I’m going to be a little more geeky. To wit: 3D graphics programming. I’m a fan of graphics coding – it involves interesting maths and methods, and can also be quite rewarding when your vision for a little application actually appears on the screen. Everyone should have a hobby, even an exceptionally nerdy one.

All this meant that when the news came down that we would need to make a demo for the upcoming CMIC open day, I jumped at the opportunity to build a cool-looking 3D realisation of my diffusion simulation. The idea is to render the tissue substrate we use to restrict the motion of the spins and the diffusing spins themselves and a couple of plots of spin displacements. It’s written in JoGL and interfaces with the Camino project, which contains the simulation code.

I should say at this point that it’s not yet finished, so this is something of a part one, but I’ve learned some interesting things this week so I thought I’d blog it.

First up, there was the basecode. This opens a window, sets up OpenGL and the rendering environment, and initialises an arcball so that you can rotate the view with the mouse. In the spirit of not reinventing the wheel, I used some code that someone else had written. Specifically, IntelliJ’s port of the NeHe OpenGL tutorials, which I can highly recommend. Lesson 48 contains the arcball code.

The meshes from the diffusion simulation can be fairly big, with hundreds of thousands or millions of triangles, so rendering them efficiently meant doing something radical: using some OpenGL functionality that was developed after 1996. Shocking, I know, but needs must.

Back in 2001, when learning OpenGL seemed like a good idea to a fresh(er)-faced young PhD student, you rendered triangles one at a time, with normals and colours specified in immediate mode. This, I have learned, is now rather akin to wearing a stove-pipe hat and working exclusively with steam-driven technology*. Instead there are these new-fangled things called Vertex Buffer Objects (VBOs).

A VBO is just a way of shunting as much of the rendering work as possible off onto the GPU. Assemble your vertex data into the right order, configure the graphics pipeline, shove the data into video RAM and then tell the card to get on with it.
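As a toy illustration of the “assemble your vertex data” step (plain java.nio, no GL context, and all the names are mine), here’s one triangle’s worth of coordinates packed into a direct FloatBuffer in the order the pipeline will read them:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class TriangleBufferDemo {
    public static void main(String[] args) {
        // One triangle: three vertices, three coords each.
        float[][] triangle = {
            {0f, 0f, 0f},
            {1f, 0f, 0f},
            {0f, 1f, 0f}
        };

        // Direct buffers live outside the Java heap, which is what
        // a GL binding ultimately hands to the driver.
        FloatBuffer vertices = ByteBuffer
            .allocateDirect(9 * Float.BYTES)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();

        for (float[] v : triangle)
            for (float coord : v)
                vertices.put(coord);

        vertices.flip(); // ready for reading by the buffer-data call

        System.out.println("floats buffered: " + vertices.remaining()); // 9
    }
}
```

The real mesh code does exactly this, just for hundreds of thousands of triangles at a time.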

It works VERY well.

I wanted to render my meshes with lighting, so I needed to specify normals as well as vertices. It turned out that working code examples for constructing and rendering VBOs with normals were a little hard to come by, so I ended up stumbling through this part on my own. I’ve got it working, though, and I’ll be posting some code snippets to show what I did. I’m not claiming this is the best code in the world, but it works and has pretty decent performance.

In the process of getting things working, I learned some important things:

  • VBOs can easily handle normals (and colours, and textures, for that matter) but OpenGL is a little fussy about the order in which you do things. You need to generate and bind the normals object and specify the normals pointer before the vertices or you’ll get an error. I’m sure there’s a good reason for this, but my knowledge is too superficial to know what it is.
  • Specifying projection geometry can be a tricky business. The view frustum won’t work with a negative clipping plane, but more importantly a near clipping plane at zero can cause your z-buffering to stop working (I presume this is due to singular values in the projection matrix). Moving the clipping plane away from zero fixes this.
  • By default OpenGL only lights one side of each triangle. This is great for a closed surface, but my meshes are unholy messes derived from stacks of microscope images – you can see inside them, so I need to render both sides of the triangles. This has nothing to do with the VBO or even the shader model; you change it by specifying a two-sided lighting model with glLightModelf(GL_LIGHT_MODEL_TWO_SIDE, 1.0f).
  • VBOs are almost supernaturally efficient. This morning I loaded a mesh with over 1,000,000 triangles. I can render it at over 30fps with no special sorting or optimisation at all, on my laptop, within my IDE.
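The near-plane problem in the second point can be checked numerically. Using the standard glFrustum depth terms, a near plane at zero collapses every eye-space depth to the same normalised-device depth, which is exactly a dead z-buffer. This is a sketch of the arithmetic, not JoGL code:

```java
public class NearPlaneDemo {
    // NDC depth for eye-space depth zEye (zEye < 0 in front of the camera),
    // using the third-row terms of the standard glFrustum matrix.
    static double ndcDepth(double zEye, double n, double f) {
        double c = -(f + n) / (f - n);
        double d = -2.0 * f * n / (f - n);
        return (c * zEye + d) / (-zEye); // perspective divide
    }

    public static void main(String[] args) {
        // Sensible near plane: depths differ, so the z-buffer can sort them.
        System.out.println(ndcDepth(-1.0, 0.1, 100.0));
        System.out.println(ndcDepth(-50.0, 0.1, 100.0));

        // Near plane at zero: every depth collapses to the same NDC value,
        // so depth testing can no longer distinguish surfaces.
        System.out.println(ndcDepth(-1.0, 0.0, 100.0));  // 1.0
        System.out.println(ndcDepth(-50.0, 0.0, 100.0)); // 1.0
    }
}
```

With n = 0 the 2fn/(f−n) term vanishes and the whole depth range maps to a single value, which is presumably the singularity at work.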

So now I have some code that renders a complex mesh with arcball rotation and lighting. I’ve added some extra functionality for a little graph in the bottom-left corner that I’ll be adding to over the next week or so. In the meantime, here’s a screenshot:

3d tissue model visualisation

A plant tissue mesh visualised with my new code

… as a bonus, I can render any mesh I’ve got a ply file for, so we can now simulate diffusion inside a cow

3D cow mesh rendering

Diffusion in hollow cows is an under-studied problem

or a sailing ship…

Thar be phase shifts ahead, me hearties!

This would make an interesting crossing-fibres problem

I’ll post some updates once there’s some more functionality. Next up: the diffusing particles, and proper displacement profile plots.


* i.e. kind of steampunk. Not quite what I was going for.

This week I got a new laptop. It’s very zingy, and so razor-thin I’m afraid I might cut my fingers when I open it. This isn’t a blog entry about the joy of the new, however; it’s about tribes and tribalism. So, in the interests of full disclosure: my new laptop runs Windows. This turns out to be surprisingly important.

For a scientist and card-carrying geek, things like what operating system your laptop runs and who manufactured it take on a disproportionate importance. In the lab where I work there’s a definite divide between those who use Macs and those who don’t, with some surprisingly strong feelings on both sides, so before long a certain amount of joshing was inevitable.

Picture the scene: there’s me at my desk with the new laptop, looking pleased with myself, when up comes an Apple-fan friend of mine. That’s new. Yes it is. Why did you choose that one? It’s thin and light. It’s not as light as my Mac. No, but it’s thinner, and it’s got a bigger screen, a faster processor, and Bang & Olufsen speakers.

And now my friend plays their trump card.

But it’s running Windows!

Gah! Defeated! I can’t stand Macs, I call as my friend vanishes back to their office.

And this little incident got me thinking: tribes. We fall so easily into little entrenched tribal groups. I’m every bit as guilty here as my friend: she likes Apple, I don’t. It’s all very good-natured, but it’s just the same as supporters of different football teams (disclaimer: I’m not for a moment saying that isn’t good-natured too!), drivers of different cars or natives of different villages.

In the end, this tribalism is part of being human: we look to support those close to us and undermine those who aren’t. It’s a pattern that shows up time and time again in different aspects of human behaviour. It applies on different scales, from tiny groups within families or offices up through loyalties to brands and national identity, and presumably would express itself in some way were we ever to make contact with intelligent extraterrestrials, a theme I’m not sure has ever been meaningfully explored in Science Fiction.

This tendency is also easy to manipulate, and is the reason that branding works as well as it does: convince people to identify with what you’re selling by giving it a symbol and associating something aspirational with it and, bingo, you have a loyal consumer base who are suddenly tapping into a very primal force when buying your products.

The day after my thorough taunting at the hands of Apple fan A, I bumped into Apple fan B – Apple fan B is by far the most enthusiastic supporter of the brand I have ever met (which is saying something – Apple customers tend to be very vocal in their support).

I duly braced myself for more cold derision but was pleasantly surprised. In fact, he paid me a very high compliment: Is that your new laptop? Yup, that’s the one.

Cool, he said — it looks like a Mac.
