It’s been a while since I posted anything techie, so this week I thought I’d revisit the visualiser I mentioned a while back. Before we continue, however, a short disclaimer: this post contains maths and (a little) code. The maths involves nothing more strenuous than vector cross-products, and the code is a few lines of JoGL, but if those things don’t float your boat then by all means feel free to ignore the rest of this post. I won’t hold it against you.

So, that said: I’ve been doing some work adding interactivity to Seer, the visualiser for the Camino Monte-Carlo simulation. The visualiser is primarily a public engagement and demonstration tool, but it’s also pretty handy for debugging. I’ve talked about it before here and here. What Seer does is visualise a live diffusion MRI simulation. It shows the diffusion environment and the positions of diffusing particles. It currently looks like this:

[Figure: visualising walkers on a mesh]

What we’ve got here is the tissue mesh I’ve talked about before, with spins executing random walks rendered in red. Their positions are updated by a Camino simulation running on the same mesh in a separate thread. The smaller plot in the bottom-left is a 3D scatter plot of the net displacement of each spin, also updated live. This shows the sort of data that we would measure from the simulation using a diffusion MRI pulse sequence.

Point picking & JoGL code

What we decided, though, was that it needed a bit more interactivity. Specifically, we want to be able to reset all the spins to a location that you select with the mouse. Since I’m already using a single click to move, the right mouse button to reset the visualisation, and the mouse wheel to zoom, I decided to go with a double-click to set spin positions.

This presents an interesting challenge, though. How do you translate the mouse coordinates (which are 2D) into a position in 3D space? Setting out, my plan was to project a vector into the OpenGL viewport, check for intersections with the mesh, sort them along the arclength, and then project into visualisation (modelview) coords and on to simulation (substrate) coordinates. What was quite nice, though, was that it turns out that OpenGL, or rather the GLU API, does quite a bit of this for you.

Point picking works by taking the coordinates of the pixel you click on, converting them to a position in the plane at the front of the view frustum (this can be done in 2D), then projecting into the scene along the current z-axis until you hit something. You then use the z-coordinate of the object you hit as your third coordinate. This gives you a 3D point that you then project into the model coordinates via the current projection and modelview matrices. OpenGL and GLU provide methods to do this: glReadPixels() reads the depth value under the pixel, and gluUnProject() does the unprojection. There’s an excellent tutorial on NeHe’s website here.

Because that tutorial is for OpenGL in C/C++, I’ll also add my code snippet in JoGL:


import java.nio.DoubleBuffer;
import java.nio.FloatBuffer;
import java.nio.IntBuffer;

import javax.media.opengl.GL;
import javax.media.opengl.GLAutoDrawable;

import com.sun.opengl.util.BufferUtil;

// walkerX, walkerY (mouse coords), glu (a GLU instance) and subsCoords
// (the substrate-frame position) are class-level variables
public final void resetWalkerPositions(GLAutoDrawable drawable, Mesh mesh){

    GL gl= drawable.getGL();

    // buffers for the viewport and the current transformation matrices
    IntBuffer viewport= BufferUtil.newIntBuffer(4);
    DoubleBuffer modelview= BufferUtil.newDoubleBuffer(16);
    DoubleBuffer projection= BufferUtil.newDoubleBuffer(16);

    gl.glGetIntegerv(GL.GL_VIEWPORT, viewport);

    // AWT counts y from the top of the window, OpenGL from the bottom,
    // so flip y using the viewport height
    int winx= walkerX;
    int winy= viewport.get(3)-walkerY;

    FloatBuffer posZ= BufferUtil.newFloatBuffer(1);
    DoubleBuffer pos= BufferUtil.newDoubleBuffer(3);

    // read the depth buffer under the mouse: this is the window-space z
    gl.glReadPixels(winx, winy, 1, 1, GL.GL_DEPTH_COMPONENT, GL.GL_FLOAT, posZ);

    gl.glGetDoublev(GL.GL_MODELVIEW_MATRIX, modelview);
    gl.glGetDoublev(GL.GL_PROJECTION_MATRIX, projection);

    // unproject the window-space point into modelview coordinates
    glu.gluUnProject((double)winx, (double)winy, (double)posZ.get(0),
            modelview, projection, viewport, pos);

    // transform into substrate coords
    boolean onMesh= mesh.GLtoSubstrate(pos, subsCoords);

    // if the coordinates are on the mesh, reset walker positions
    if(onMesh){
        // tell the simulation thread to reset walker positions
        SimulationThread.resetWalkers(subsCoords);
    }
    else{
        System.err.println("coords are off-mesh. walkers not reset.");
    }
}

In addition to the use of gluUnProject(), there’s one additional JoGL-specific issue here: the GL object itself. The way I’d designed the code meant that the method that catches the double-click event was nowhere near the rendering code that does the unprojection and talks to the simulation. I spent a bit of time trying to get hold of a GL object and hand it over to the event handler, but nothing I tried worked. Instead, I realised that all the event handler actually needs to do is provide the mouse coordinates and instruct the render method to do the rest. So all it does is set a flag and hand over the coords via class-level variables. That’s a theme that’s emerged a little recently: sending instructions to other parts of the system via global flags rather than method calls. It works pretty well when you’ve got functionality that’s spread across different parts of the code. (I suppose I could also have used static variables, but the principle is the same and this way things are more self-contained.)
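As a minimal sketch of that pattern (the resetRequested flag and the listener wiring here are illustrative, not the actual Seer code), the handler just records the request and the render method services it:

import java.awt.event.MouseEvent;
import javax.media.opengl.GLAutoDrawable;

// in the mouse listener: no GL object needed here
public void mouseClicked(MouseEvent e){
    if(e.getClickCount() == 2){
        walkerX= e.getX();        // class-level coords
        walkerY= e.getY();
        resetRequested= true;     // class-level flag
    }
}

// in the render method, where a current GL object is available
public void display(GLAutoDrawable drawable){
    if(resetRequested){
        resetRequested= false;
        resetWalkerPositions(drawable, mesh);
    }
    // ... draw the scene as usual ...
}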

Planes, projection and a bit of maths

So: sorted. Well, actually, no. Unfortunately, the meshes that I’m clicking on have a lot of holes in them, and sometimes I want to click on a hole instead of a triangle. In this case, gluUnProject() gives a point at infinity, which isn’t what I want. I want a point half way across my substrate. This means there’s a special case to catch. Fortunately, points at infinity are easy enough to catch, because the depth value will be equal to 1 (nothing was hit before the far clipping plane), but what to do once you’ve caught one?

Firstly, we need to recognise that this is essentially a projection into a plane. The plane in question bisects the substrate half way along its z-axis and so is easily defined, but in viewport coords it will depend on the current view of the mesh (the modelview matrix). Given a plane ax + by + cz + d = 0 and a point \left( X, Y, Z \right), we just choose a new z-coord such that

Z' = -\frac{aX + bY + d}{c}
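To make that concrete, here’s a sketch of the catch itself, placed just after the glReadPixels() call in the method above. planeZ() is a hypothetical helper, sketched at the end of this section, that returns the window-space depth of the bisecting plane:

// a depth of 1.0 means nothing was hit: the click went through
// a hole to the far clipping plane
if(posZ.get(0) >= 1.0f){
    // swap in the depth of the substrate's bisecting plane
    posZ.put(0, (float)planeZ(gl, winx, winy));
}

gluUnProject() then proceeds exactly as before, but lands on the plane instead of at infinity.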

The tricky part is knowing what a, b, c and d are. My initial thought was to back-rotate into substrate coords and project into the appropriate plane, but this requires you to invert the modelview matrix, which frankly I cannot be bothered to write code for (and in any case is an expensive operation), so I need to be working in viewport coordinates, not modelview coordinates. So then I thought I’d use the modelview matrix to rotate the plane normal, but it turns out that plane normals transform with the inverse of the rotation matrix, so once again we’re back to square one.

The answer is to define the plane using three points and use the cross product to get the plane normal. Any three non-collinear points define a plane. These points transform using the modelview matrix, not its inverse, and the components of the normal to the plane are the coefficients we want. The algebra works out like this [cracks knuckles], [flexes fingers in the manner of a concert pianist]:

\hat{\mathbf{n}}= \frac{\mathbf{n}}{|\mathbf{n}|}

\mathbf{n} = \left(\mathbf{v}_2 - \mathbf{v}_1\right) \times \left(\mathbf{v}_3 - \mathbf{v}_1\right) = \left(\mathbf{v}_3 \times \mathbf{v}_1\right)-\left(\mathbf{v}_2 \times \mathbf{v}_1\right)-\left(\mathbf{v}_3 \times \mathbf{v}_2\right)

because (writing \mathbf{a} = \mathbf{v}_2, \mathbf{b} = \mathbf{v}_1 and \mathbf{c} = \mathbf{v}_3)

(a - b) \times (c - b) = a \times (c-b) - b \times (c-b) = (c-b) \times b - (c-b) \times a

(a - b) \times (c - b) = c \times b - b \times b - c \times a + b \times a

(a - b) \times (c - b) = c \times b - c \times a + b \times a

and we’re away. Three cross products and no matrix inversion.
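Here’s how that might look in code. This is a sketch under a few assumptions: p1, p2 and p3 are any three non-collinear points in the substrate’s bisecting plane, expressed in the same object coordinates the scene is drawn in, and planeZ(), cross() and sub() are my own illustrative helpers rather than anything in Seer. gluProject() carries the points through the modelview, projection and viewport transforms for us, so no inversion is needed:

import java.nio.DoubleBuffer;
import java.nio.IntBuffer;

import javax.media.opengl.GL;

import com.sun.opengl.util.BufferUtil;

// window-space depth of the plane through p1, p2, p3 at pixel (winx, winy)
private double planeZ(GL gl, int winx, int winy){

    IntBuffer viewport= BufferUtil.newIntBuffer(4);
    DoubleBuffer modelview= BufferUtil.newDoubleBuffer(16);
    DoubleBuffer projection= BufferUtil.newDoubleBuffer(16);

    gl.glGetIntegerv(GL.GL_VIEWPORT, viewport);
    gl.glGetDoublev(GL.GL_MODELVIEW_MATRIX, modelview);
    gl.glGetDoublev(GL.GL_PROJECTION_MATRIX, projection);

    // project the three plane-defining points into window coords
    double[][] p= {p1, p2, p3};   // non-collinear points in the plane
    double[][] v= new double[3][];
    for(int i=0; i<3; i++){
        DoubleBuffer win= BufferUtil.newDoubleBuffer(3);
        glu.gluProject(p[i][0], p[i][1], p[i][2],
                modelview, projection, viewport, win);
        v[i]= new double[]{win.get(0), win.get(1), win.get(2)};
    }

    // n = (v3 x v1) - (v2 x v1) - (v3 x v2), as derived above
    double[] n= sub(sub(cross(v[2], v[0]), cross(v[1], v[0])), cross(v[2], v[1]));

    // the plane is n.x + d = 0, so d = -n.v1
    double d= -(n[0]*v[0][0] + n[1]*v[0][1] + n[2]*v[0][2]);

    // Z' = -(aX + bY + d)/c
    return -(n[0]*winx + n[1]*winy + d)/n[2];
}

private static double[] cross(double[] u, double[] v){
    return new double[]{ u[1]*v[2]-u[2]*v[1],
                         u[2]*v[0]-u[0]*v[2],
                         u[0]*v[1]-u[1]*v[0] };
}

private static double[] sub(double[] u, double[] v){
    return new double[]{ u[0]-v[0], u[1]-v[1], u[2]-v[2] };
}

Note that the normalisation \hat{\mathbf{n}} = \mathbf{n}/|\mathbf{n}| isn’t actually needed here: Z' is unchanged if the plane equation is scaled, so the raw cross products are enough.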

I’ll call it a day there. I’ll post some code snippets once they’re done.
