Mass. The stuff of stuff. Tough to move about, painful if you run into it, and with a tendency for big things to pull together. Recently I’ve been pondering mass, and given the stunning work on the Higgs boson that’s going on at the moment, I thought I might share some thoughts.

So: mass. It seems a pretty intuitive concept. Push (or pull) something, and the heavier it is the more it resists the force you apply. This is encapsulated in physics in Newton’s second law (F=ma) and is fundamental to classical mechanics. It’s so intuitive and well-established that it’s easy to forget that although Newton’s laws are astoundingly accurate, useful, and powerful, they don’t say anything about what mass actually is.

One thing it’s definitely not is easy to explain, or well understood. For one thing, there are two concepts of mass in physics. There’s the mass I just mentioned, inertial mass, which is an object’s resistance to changes in its motion, but there is also gravitational mass, which is the property that causes massive objects to attract each other – this is the stuff of both Newton’s law of gravitation and Einstein’s general relativity. If you think about it, these are in fact quite different things: one is the response of an object to being shoved around, the other is its tendency to attract other masses. In strict terms these are two completely different interactions. If I push something I’m interacting with it via the electromagnetic force (yes, I am: electro-chemical processes in my brain and muscles move my hands (or whatever) to an object and push against it, and on the smallest scale this is mediated by the electrons in the molecules of my skin cells repelling the electrons in the atoms at the surface of whatever I’m pushing), whereas gravity is usually thought to be a separate fundamental interaction – general relativity models it as the object curving the space it sits in. Two completely different forces acting in completely different ways.

Looked at from this perspective, it almost seems to be a coincidence that both of these concepts have the same name – why call them both mass? Aren’t we conflating two different things? So here’s something worth pondering: if we measure the inertial mass of an object and then measure the gravitational mass of the same object they are exactly the same. And that, if you think about it, is pretty odd. There have been some astonishingly accurate comparisons of inertial and gravitational mass over the years, and none of them have ever detected any measurable difference between them.

So, enter Einstein. The usual explanation of the equality of inertial and gravitational mass is the Principle of Equivalence. Einstein realised that inertial and gravitational mass are connected via acceleration. Imagine you’re in space, floating, weightless. You’re not experiencing any forces, including gravity. Now imagine you’re in a lift in a very, very tall building. When the lift is stationary you feel gravity. You also feel other little forces when the lift speeds up or slows down – a little heavier when the lift starts to move upwards, a little lighter when the lift slows before stopping at a floor. Now imagine that the lift cable snaps – you fall, until you hit the ground. Being in the lift you don’t feel the air rushing around you, and there are no windows so you can’t see the walls zooming past or the floor shooting upwards, heralding your imminent doom. This means you don’t feel any forces – you, the floor, the walls, and the air around you are all being accelerated in the same way towards the ground, so while you are falling you are weightless, just like you were in space. Now imagine that the building is so tall that you never hit the ground – you can’t tell the difference between free-fall and the absence of gravity. This is the principle of equivalence: an “inertial frame” (the one in space) is identical to one in free-fall. In general relativity, the free-falling one is thought of as moving along a line in a curved spacetime. These two things are equivalent.

So there you have it: inertial and gravitational masses are the same because of the principle of equivalence. Except… well, the clue’s in the name, really – the principle of equivalence is just that, a principle. It works, but it doesn’t tell you WHY these things are equivalent, just that the theory works out if they are. It puts me in mind of a quote about wave-particle duality in quantum mechanics: they are completely different things which we think of as being the same. It is as if we saw a rabbit sitting in a tree. This would be pretty unusual, but is completely explained if we just think of the rabbit as being a cat, in which case we’d understand its behaviour quite well. The two masses are equivalent because the theory tells them to be. This is clearly not the end of the story.

But none of this is too remarkable – mass and mass, gravity and acceleration: all well-trodden ground. I congratulate you for persevering this far. The really interesting stuff comes when you start thinking about quantum mechanics and another famous Einsteinian concept: the equivalence of energy and mass, E=mc^2.

Mass is a long-standing problem in the quantum world. On the one hand there are ongoing efforts to unify the four forces and construct a quantum description of gravity. I’m no expert here, but given that this has been THE problem in theoretical physics for the better part of 40 years and we’re still very far from testable experimental predictions, it’s safe to say that this is hard. We’ve got string theory, we’ve got loop quantum gravity, we’ve got extra dimensions and whorls in spacetime, and unfortunately we’ve also got serious difficulties with divergences and suggestions based around the compactification of multiple higher dimensions. Heady stuff.

Then there’s inertial mass in the quantum world. This is the stuff that’s been making headlines of late with the likely discovery of the Higgs boson. The Higgs mechanism is a hypothesised answer to what mass actually is. The idea is that empty space is filled with a thing called the Higgs field – a sort of sticky soup of virtual particles that resists changes in the motion of anything moving through it. A good analogy for this is a ping-pong ball on a string in a bucket of water (no, really). Forget for a moment that ping-pong balls float, and imagine that the ball is in the water and you pull it along with the string. The water resists the motion of the ball and makes it feel heavier. It would be worse still in treacle. This is what the Higgs field does – it resists changes to the ball’s motion and causes an effect that’s a lot like mass.

The Higgs boson is the particle that mediates the interaction between the Higgs field and particles placed in it, and you might be tempted to think that this has inertial mass nailed, but you’d be wrong. The Higgs field explains the mass of elementary particles, like electrons and quarks. As we all know, protons and neutrons are made of three quarks, so you’d think that their mass would be about three times the mass of a quark. Quantum mechanics being what it is, this isn’t the case. Protons and neutrons are actually around a hundred times more massive than the combined mass of their constituent quarks. The remainder is made up from the binding energy of the quarks (the energy of the bonds connecting them together). There’s a lot of energy in there, and its contribution to the mass can be calculated via none other than E=mc^2, which gives a very accurate estimate of the mass of protons and neutrons.
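
To put rough numbers on that (these are ballpark textbook figures, not anything fresh from the LHC): the proton weighs in at about 938 MeV/c², while the up and down quarks contribute only a few MeV/c² each, so

m_p \approx 938\,\mathrm{MeV}/c^2, \qquad 2m_u + m_d \approx 9\,\mathrm{MeV}/c^2, \qquad 938/9 \approx 100

– the quarks themselves account for only around 1% of the mass.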

There’s just one snag here: this mass isn’t coupled to the Higgs field. The Higgs interaction couples to the electroweak force, but quarks are bound via the strong force. E=mc^2 tells us the amount of mass in there, but not the reason why this energy resists changes in motion. We also don’t know the relationship between Higgs mass and gravitational mass, or how all this might relate to the principle of equivalence. Put simply, we don’t know what some 98% of the mass in atoms is or how it connects to gravitation. Hence my suggestion that we don’t really understand mass.

And this is also borne out by a current crisis in theoretical physics. It’s easy enough to state: what would you see, and what would happen to you, if you fell into a black hole (and specifically, what happens when you cross the event horizon)? Until recently, the answer from most physicists would have been that you wouldn’t really notice – you’d be passing through empty space, and the principle of equivalence tells us that this would feel like… floating in space. After a long time you’d notice that your feet were falling faster than your head (assuming you were falling feet first) and after a very long time the difference in force would eventually tear you apart. Ow. Nasty way to go.

More recently, though, cracks have emerged in this story. They have to do with the nature of the vacuum – classically, there’s nothing there, but in the quantum world it’s a writhing, foaming sea of pairs of virtual particles spontaneously coming into being and then annihilating (this is a consequence of the uncertainty principle: a completely empty vacuum would have a precisely known energy – zero – forever, which is forbidden).

At the event horizon of a black hole, one of the pair can fall into the black hole while the other zips off to infinity. This is Hawking radiation: the fleeing particles steal a bit of energy from the black hole and cause it to shrink slightly. Eventually this causes an isolated black hole to vanish completely.

The problem is that it doesn’t stop there. The information about the particle that falls into the black hole is destroyed when it falls in – there’s no way of discovering what the particle was by looking at the radiation coming from the black hole itself. This is a problem because a fundamental principle of quantum field theory is that information is never lost – you can always recover it. A potential way out is to imagine that the pair of particles are entangled, so that the one that zips off to infinity tells us about the one that falls in, but this leads to a lot of energy being released when the pair separate.

And here comes the punchline: general relativity says you wouldn’t feel much as you fall into a black hole; quantum mechanics says you would meet a wall of fire at the event horizon. Either the principle of equivalence is wrong or the holographic principle is wrong. The two pillars of modern physics are in contradiction: at least one is going to fall, and it’s all linked to the nature of mass and the nature of the vacuum.

Heady days.

So what are people doing to get around this? A surprising amount, it appears.

There are a couple of different approaches, all of which seem to revolve around studying the quantum vacuum itself. Firstly, there are several theories which suggest that the reason it’s so difficult to combine quantum field theory with general relativity is that gravity isn’t actually a fundamental force at all – it’s an emergent effect of the interaction of particles with the quantum vacuum, with general relativity emerging as an “effective theory” at larger scales. This sort of emergence has been observed in systems like crystals and superconductors, and might point to the origin of gravitational mass.

On the other hand, there’s an approach that models the quantum vacuum using classical physics (which is apparently quite good at reproducing blackbody curves) and treats inertial mass as another interaction between particles and the vacuum. This time, resistance to changes in motion is generated by the exchange of energy with this classical model of the quantum vacuum.

What’s interesting is how similar the two approaches are: particles interacting with some model of the vacuum. They also both suggest that if we could manipulate the vacuum we could manipulate mass, which would be very exciting indeed.

There’s also the ongoing problem with Dark Matter and Dark Energy – we don’t know what they are (especially Dark Energy) but we need them to make our models work. Wildly speculating, it’s possible that this is linked to the black hole conundrum.

For an outsider like me, though, it’s hard not to think that all this is very reminiscent of the situation in physics at the end of the 19th century: a problem we cannot solve that appears to undermine all our well-established physics. The way out of that one was quantum mechanics, which has done pretty well for itself and led to new technologies that would have seemed like magic beforehand.

The way out of this will undoubtedly be exciting and revolutionary. Can’t wait to see what people come up with!

… and we’re back. I realise I haven’t posted for a couple of weeks. This is down to a nasty bout of flu which laid me very low and generally made the world seem an iller place. This week I want to talk a bit about high frame rate technology in movies, and the storm of slightly odd criticism being thrown at The Hobbit. I realise no one is terribly interested in what I think about this, but hey, when does that ever stop the average blogger?

First a bit of background. Most movies update at 24 frames per second – this is pretty well known. Yes, yes, yes: TV is often interlaced, but the complete image still updates 24 times per second, even if the lines are staggered. As far as I can tell, this dates back to the earliest days of cinema, when 24fps was recognised as the lowest frame rate that gave a convincing illusion of movement rather than a sequence of still images. Like all the best standards, it’s persisted for an impressively long time.

Recently, though, a plethora of HD technologies have been foisted on an unsuspecting public. We’ve gone relatively rapidly from VHS to DVD to Blu-ray and 3D. The TV in my front room has eye-wateringly crisp definition and magnificent contrast and depth of colour. It’s a joy to watch, and the movies I play on it are incredibly well mastered and encoded.

All of which makes it a bit odd that, by default, it came with literally dozens of digital post-processing algorithms designed to reprocess poor-quality video input. I assume they are intended to make the picture better, but in fact all the dynamic contrast enhancement, edge detection, smart vector deinterlacing and deblocking conspired to make my Blu-rays look frankly ludicrous. I spent three weeks finding all the settings on the TV and switching everything off, then doing the same with my Blu-ray player software and again with my graphics card driver (I use an HTPC). Having finally gotten to the bottom of them all, I still find I occasionally have to repeat the process after software updates helpfully turn a few randomly back on.

Of all of these things, two are the most infuriating. The first is smart deinterlacing, which is so smart that it tries to deinterlace content that is not in fact interlaced in the first place, and manages to turn 1080p into something resembling over-played VHS. Seriously, turn this one off and you’ll see a VAST improvement in picture quality. If in doubt, deactivate deinterlacing completely; you won’t regret it.

The second, however, is the real troll under the bridge: smoothing. On my TV this is called MotionPlus, but technically it’s interpolation in the time domain. From a technical standpoint this is actually a pretty obvious thing to do. All the improvements in definition I mentioned above improve the spatial resolution of the individual images in the movie: smaller dots, larger images, simply more pixels. This is great, but it’s only half the story. Because the images are animated, you could also add in extra frames in between the 24 you’ve already got each second.
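
To make that concrete, here’s a toy sketch in Java of what “adding a frame in between” means. Real TVs use motion-compensated interpolation, which is far more sophisticated; this naive per-pixel blend (class and method names are mine) is just the bare idea:

import java.awt.image.BufferedImage;

public class FrameBlend {

    // Generate an in-between frame by linearly blending two consecutive
    // frames: t=0 gives frame a, t=1 gives frame b, t=0.5 the midpoint.
    static BufferedImage midFrame(BufferedImage a, BufferedImage b, double t) {
        int w = a.getWidth(), h = a.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int pa = a.getRGB(x, y), pb = b.getRGB(x, y);
                // blend each 8-bit colour channel separately
                int r = (int) ((1 - t) * ((pa >> 16) & 0xFF) + t * ((pb >> 16) & 0xFF));
                int g = (int) ((1 - t) * ((pa >> 8) & 0xFF) + t * ((pb >> 8) & 0xFF));
                int bl = (int) ((1 - t) * (pa & 0xFF) + t * (pb & 0xFF));
                out.setRGB(x, y, (r << 16) | (g << 8) | bl);
            }
        }
        return out;
    }
}

Call midFrame(a, b, 0.5) once per pair of originals and you’ve turned 24fps into 48fps.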

And this is where it starts to get interesting, because we’re starting to talk about things that don’t just involve the technology but also involve human perception. Throughout the history of cinema it’s been assumed that 24fps is enough to fool a person into thinking that a sequence of still images is actually a smooth flow of animation, but as I mentioned at the beginning, it was adopted because it was the slowest speed that did the job, and this does make a difference.

If you try hard, however, (and largely ignore the detail of what you’re watching) it is possible to see the transitions between frames. This is actually easier with HD stuff – the picture is so crisp that the tiny judder due to the frame rate is easier to spot. The reason we overlook this, I believe, is that all of us have grown up watching 24fps video and accepting it as such. We subconsciously ignore the fact that the illusion isn’t quite perfect.

So, enter temporal interpolation. With hardware acceleration you can interpolate frames in real time, and with a decent enough algorithm you can increase the apparent frame rate quite a lot. My TV’s top setting is 200Hz. That’s pretty smooth. Newer sets are (discreetly) claiming frame rates of 400Hz. That’s a hell of a lot of interpolation.

I don’t use it. Why not? Because it makes beautifully directed movies with high production values look about as convincingly real as local amateur dramatics (I’m not having a go at amateur dramatics – it’s just that the production values tend to be a bit cheap and cheerful, which of course is not why you go!). A lot of the time it’s completely unwatchable. Interestingly, this is all very similar to the criticism levelled at The Hobbit, which was shot at 48fps – which suggests that it’s not an artifact of the interpolation but the increased temporal resolution itself that’s to blame.

But why on earth should this be? Improving the frame rate should reduce judder and Peter Jackson is quoted as saying that filming at 48fps almost eliminates motion blur, both of which should improve the viewing experience, not make things unwatchable.

So, here’s my 2 cents-worth: the effect is perceptual. I’ve already mentioned that because we’ve always watched movies at 24fps we’re conditioned to it. You could extend this argument to suggest that, subconsciously, we have a mental category for filmed material – we accept things animated in this way as fictions, separate from reality.

To develop this idea completely I need to introduce one more concept: the Uncanny Valley. The Uncanny Valley is a well-documented effect in computer graphics, but it is basically perceptual. It works like this: imagine a simple cartoon, like a stick-man. We accept this as a rough representation of a person. It’s not very accurate, but it stands for a person well enough. If we add more detail, like a face or some feet, we accept this as well – in fact, the cartoon becomes slightly more convincing than before.

Add more and more detail – more realistic shape, colouring, clothing, more nuanced behaviour – and we find the illusion more and more convincing, whilst still being aware that it is a cartoon. This continues up to a point, but once a representation of a person gets very close to being completely realistic we start to reject it: the cartoon character becomes a doll-eyed automaton and the illusion is ruined.

This is the Uncanny Valley, and it’s a valley of perception. At a certain level of realism there’s a shift in perception: we cease to accept the figure as a cartoon and start to put it in the same category as actual people. At this stage different mechanisms kick in and we apply different standards: this is an illusion, and our inner cave-men will not be fooled by this sorcery. Ha!

This is a difficult thing to overcome. The Polar Express is often touted as a movie that suffered hugely from the Uncanny Valley. It attempted to be photorealistic and failed. The same was true of Final Fantasy: The Spirits Within, but isn’t true of (say) Tintin, whose people objectively look nothing like real people but are paradoxically easier to accept as such.

And this is what I think the problem is with higher frame rates – they cause us to categorise what we are seeing as real life rather than movies, and as a consequence they look like actors on film sets rather than convincing illusions. Of course, with practice you can train yourself to recognise them as movies and all becomes right with the cosmos. I watched season 2 of Game of Thrones at a friend’s house on their 200Hz TV. It took seven episodes before I could look past the overt hyper-reality of the interpolation, and I’m still not completely past it, but at home I just deactivate it as it only distracts from enjoying the movie.

There’s one final thing to add here. This effect is made worse by bad lighting. Interpolated content shot outside with natural lighting doesn’t grate nearly as much as poorly lit studio shots. I’m guessing the artificiality just adds to the unreality.

So those are my thoughts on the subject: perceptual shifts causing movies to look unrealistic. If this is true, I imagine it poses an interesting challenge for a director: can you make a convincing-looking movie at 48fps? It would probably end up being spectacular.

Update: It seems that the situation in early cinema with regard to frame rate is quite interesting. Silent movies typically had frame rates of 20-26 fps, although this was more a function of equipment design than anything perceptual. Thomas Edison apparently insisted that 46 fps was the minimum, and that “anything less will strain the eye.” Interesting.

Furthermore, the perception of very short events is quite complex. Flashes of darkness as short as 10ms are perceptible, but flashes of light produce a “persistence of vision” effect that can make a 10ms flash of light appear more like 100ms long, and can cause consecutive flashes of different colours to merge together so that, for example, red plus green is perceived as yellow.

So, finally back from chilly wanderings, and what better way to celebrate a marvellous time in Sweden than with some fantastic photos. We were up in a place called Bjornrike (which means “Kingdom of Bears” in Swedish). These shots are actually all from the same day, when we were skiing cross-country in the afternoon. The day was crisp and cold, with spectacular lighting and (as you’ll see below) an astounding sunset.

Anyway, I’ll stop prattling.

Lunchtime, towards an Arctic sun.

This is from the car park at the end of the day

Bright sun and dark clouds.

That glorious sunset

I’m still on holiday, enjoying a little Stockholm jazz. Normal service will be resumed eventually!

I’m in Sweden for Christmas and New Year, escaping the miserably wet weather at home. Until normal service is resumed, here’s a snowy scene from Stockholm.

Happy Christmas!

With the end of the Mayan Long Count Calendar today, it seems only fair to join the melee and talk a little about the end of the world, oh – excuse me: the End of the World. It at least deserves to be capitalised. First of all let me say that if you genuinely believe the world is going to end I am not out to mock you (although why you would spend your last hours on the internet reading my blog is not entirely clear). As ever, I’m just going to talk about some thoughts I’ve had on the subject and explain what I believe is a rational response to prophecies of impending doom.

But more of that later. Prophecies of impending doom seem to come around every so often; my first encounter with the idea was in about 1984, when I was 9. Round about the end of September, a documentary on BBC2 announced that one particular prophecy had it that the end of the world was nigh. “In fact,” claimed the presenter, “very nigh: next Thursday.” That, it is safe to say, scared the 9-year-old me very much indeed – so much so that I had to have the day off school the following Thursday. As I recall, we had a class outing planned to Sadler’s Wells theatre that day, which I missed as a result. I was kind of looking forward to it at the time.

Needless to say, the world did not end, but I learned a lesson that day about uncertainty, about not believing everything you see on TV, and about the fact that adults don’t always know what they’re talking about. Since then predictions of the end of the world have come and gone, and I’ve developed what I believe is a sane and sensible response to them. It is as follows: you think the world is going to end on Saturday? I’ll bet you £50 it won’t. Or an equivalent quantity in your local currency.

This is based on the fact that, in the absence of evidence that a doomsday scenario is on the cards, I don’t believe that it will happen. If the world is still there, I’m £50 up. If the world ends, well, I’m not likely to have to make good on the debt, and this fact may actually be of some small comfort as the meteor slams into the Earth’s crust / the tail of the comet seeds a deadly virus / the antichrist, harbinger of the apocalypse, punishes the unrighteous.

This is the essence of game theory: I play a certain strategy, and I get a certain payoff in a certain circumstance. In this case I choose to bet. If I win the bet I am richer; if I lose the bet, everyone is dead. On the other hand I could choose not to bet, in which case if the world survives I am no richer, but if the world ends everyone still dies. A rational agent should therefore choose to make the bet. Of course, this implies that someone is willing to bet with me, but seeing as this hypothetical person must presumably believe that the world is going to end, they have nothing to lose and are probably thinking about rewarding ways to spend their last hours – in which case a bit of a wager might be fun, and the prospect of me having to admit that all my fancy book learnin’ hasn’t helped me one bit might also soften the blow a little.
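
If you like, the payoff matrix can be spelled out explicitly. A scrap of illustrative Java (the numbers are obviously mine):

public class DoomsdayBet {
    public static void main(String[] args) {
        // payoff[strategy][outcome]: strategy 0 = bet, 1 = don't bet;
        // outcome 0 = world survives, 1 = world ends
        double[][] payoff = {
            { +50.0, Double.NEGATIVE_INFINITY },  // bet: £50 richer, or dead like everyone else
            {   0.0, Double.NEGATIVE_INFINITY }   // don't bet: no richer, and still dead
        };
        // betting is never worse than not betting, and strictly better
        // if the world survives: in the jargon, it weakly dominates
        for (int s = 0; s < 2; s++)
            System.out.printf("strategy %d: survive=%.0f, doom=%s%n",
                    s, payoff[s][0], payoff[s][1]);
    }
}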

I could look at this another way: why do I believe that the world isn’t going to end? You could argue that none of the previous predictions have proved true, so why should this one? This, however, is very faulty reasoning. The end of the world is something that, by definition, will only happen once. So the fact that we’re still here after all those false predictions is just survivorship bias – if any of them had proved true, we would not be here to ask the question.

This is where we get into some interesting questions about statistics and how we interpret probability. Classically, probability was interpreted as a frequency of events: given a large number of trials you would expect one result x percent of the time, another y percent of the time, and so on. This is called the frequentist approach, and it works well when you are dealing with lots of repeatable events, like coin tosses. If I flip a coin, I expect heads 50% of the time and tails 50% of the time (in a vanishingly small number of cases it might land on its side, but we’ll ignore that), so the probability of heads is 0.5.

This idea works well for coin tosses, but less well for the end of the world. The end of the world won’t repeat, but I would still like to attach a probability to it happening: surely on a day when there is no prophecy or threat of nuclear destruction the probability is lower than on a day when there is? Here we get into the realm of Bayesian statistics. Bayesian statistics is built on the idea of conditional probabilities – “given that this has happened, the probability that that will happen is…” – and allows you to include prior information, such as the amount of evidence for a particular outcome. This means that I can say things like

“given the recent observation of a huge fiery comet of doom headed directly for central London, the probability of the end of the world is…”

or

“given that major changes in various numbering systems have come and gone without incident, and that the evidence for any effect of overflow in numbering time based on arbitrary start dates and dynamical range is thin at best, the probability of the end of the world on 21st December 2012 is not statistically different from that on the 20th or 22nd December 2012”.
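
For the record, the machinery behind statements like these is Bayes’ theorem, which in doomsday terms reads:

P(\mathrm{doom} \mid \mathrm{evidence}) = \frac{P(\mathrm{evidence} \mid \mathrm{doom}) \, P(\mathrm{evidence})^{-1} \, P(\mathrm{doom})}{1} = \frac{P(\mathrm{evidence} \mid \mathrm{doom}) \, P(\mathrm{doom})}{P(\mathrm{evidence})}

With no fiery comet in the evidence column, the posterior stays pinned to the (very small) prior.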

So, combining Bayesian statistics, with its ability to incorporate prior knowledge into estimates of probability, and game theory, with its notions of strategy and expectation, I will bet you £50 that the world does not end today. In fact, I’m not the first to come to this conclusion. See you tomorrow in the 13th Long Count period!

It’s been a while since I posted anything techie, so this week I thought I’d revisit the visualiser I mentioned a while back. Before we continue, however, a short disclaimer: this post contains maths and (a little) code. The maths involves nothing more strenuous than vector cross-products, and the code is a few lines of JoGL but if those things don’t float your boat then by all means feel free to ignore the rest of this post. I won’t hold it against you.

So, that said: I’ve been doing some work adding interactivity to Seer, the visualiser for the Camino Monte-Carlo simulation. The visualiser is primarily a public engagement and demonstration tool, but it’s also pretty handy for debugging. I’ve talked about it before here and here. What Seer does is visualise a live diffusion MRI simulation: it shows the diffusion environment and the positions of diffusing particles. It currently looks like this:

visualising walkers on a mesh

What we’ve got here is the tissue mesh I’ve talked about before, with spins executing random walks rendered in red. Their positions are updated by a Camino simulation running on the same mesh in a separate thread. The smaller plot in the bottom-left is a 3D scatter plot of the net displacement of each spin, also updated live. This shows the sort of data we would measure from the simulation using a diffusion MRI pulse sequence.

Point picking & JoGL code

What we decided, though, was that it needed a bit more interactivity. Specifically, we want to be able to reset all the spins to a location that you select with the mouse. Since I’m already using a single click to move, the right mouse button to reset the visualisation, and the mouse wheel to zoom, I decided to go with a double-click to set spin positions.

This presents an interesting challenge, though. How do you translate the mouse coordinates (which are 2D) into a position in 3D space? Setting out, I had a plan involving projecting a vector into the OpenGL viewport, checking for intersections with the mesh, sorting them along the arclength, then projecting into visualisation (modelview) coords and on to simulation (substrate) coordinates. What was quite nice, though, was that it turns out that OpenGL, or rather the GLU API, does quite a bit of this for you.

Point picking works by taking the coordinates of the pixel you click on, converting them to a position in the plane at the front of the view frustum (this can be done in 2D), then projecting into the scene along the current z-axis until you hit something. You then use the z-coordinate of the object you hit as your third coordinate. This gives you a 3D point that you then project into the model coordinates via the current projection and modelview matrices. OpenGL and GLU provide methods to do this, specifically glReadPixels() and gluUnProject(). There’s an excellent tutorial on NeHe’s website here.

Because that tutorial is for OpenGL in C/C++, I’ll also add my code snippet in JoGL:


public final void resetWalkerPositions(GLAutoDrawable drawable, Mesh mesh){

    // walkerX, walkerY (the click coords), glu and subsCoords are class-level fields
    GL gl = drawable.getGL();

    // space for the current viewport, modelview and projection matrices
    IntBuffer viewport = BufferUtil.newIntBuffer(4);
    DoubleBuffer modelview = BufferUtil.newDoubleBuffer(16);
    DoubleBuffer projection = BufferUtil.newDoubleBuffer(16);

    gl.glGetIntegerv(GL.GL_VIEWPORT, viewport);

    // window coords of the click; AWT's y-axis runs top-down, OpenGL's bottom-up
    int winx = walkerX;
    int winy = viewport.get(3) - walkerY;

    FloatBuffer posZ = BufferUtil.newFloatBuffer(1);
    DoubleBuffer pos = BufferUtil.newDoubleBuffer(3);

    // depth of whatever was under the mouse pointer when clicked
    gl.glReadPixels(winx, winy, 1, 1, GL.GL_DEPTH_COMPONENT, GL.GL_FLOAT, posZ);

    gl.glGetDoublev(GL.GL_MODELVIEW_MATRIX, modelview);
    gl.glGetDoublev(GL.GL_PROJECTION_MATRIX, projection);

    // unproject the window coords into model (visualisation) coords
    glu.gluUnProject((double)winx, (double)winy, (double)posZ.get(0),
            modelview, projection, viewport, pos);

    // transform into substrate coords
    boolean onMesh = mesh.GLtoSubstrate(pos, subsCoords);

    // if the coordinates are on the mesh, reset walker positions
    if(onMesh){
        // tell the simulation thread to reset walker positions
        SimulationThread.resetWalkers(subsCoords);
    }
    else{
        System.err.println("coords are off-mesh. walkers not reset.");
    }
}

In addition to the use of gluUnProject(), there’s one additional JoGL-specific issue here: the GL object itself. The way I’d designed the code meant that the method that catches the double-click event was nowhere near the rendering code that does the unprojection and talks to the simulation. I spent a bit of time trying to get hold of a GL object and hand it over to the event handler, but nothing I tried worked, so instead I realised that all the event handler actually needed to do was provide the mouse coordinates and instruct the render method to do the rest. So all it does is set a flag and hand over the coords via class-level variables. That’s a theme that’s emerged a little recently: passing instructions to other parts of the system via global flags rather than method calls. It works pretty well when you’ve got functionality that’s spread across different parts of the code. (I suppose I could also have used static variables, but the principle is the same and this way things are more self-contained.)
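
For illustration, the shape of that hand-off looks something like this – field and method names here are schematic rather than the actual Seer source:

// set from the AWT event thread; volatile so the render thread sees updates
private volatile boolean resetRequested = false;
private volatile int walkerX, walkerY;

// event thread: record the click and raise the flag -- no GL calls here
public void mouseClicked(MouseEvent e){
    if(e.getClickCount() == 2){
        walkerX = e.getX();
        walkerY = e.getY();
        resetRequested = true;
    }
}

// render thread: a valid GL context exists here, so do the real work
public void display(GLAutoDrawable drawable){
    if(resetRequested){
        resetRequested = false;
        resetWalkerPositions(drawable, mesh);
    }
    // ... normal rendering ...
}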

Planes, projection and a bit of maths

So: sorted. Well, actually no. Unfortunately the meshes that I’m clicking on have a lot of holes in them, and sometimes I want to click on a hole instead of a triangle. In this case, gluUnProject() gives a point at infinity, which isn’t what I want – I want a point half way across my substrate. This means there’s a special case to catch. Fortunately, points at infinity are easy enough to detect, since the depth coordinate will be equal to 1, but what to do once you’ve caught one?

Firstly, we need to recognise that this is essentially a projection into a plane. The plane in question bisects the substrate half way along its z-axis, and so is easily defined in substrate coords, but in viewport coords it will depend on the current view of the mesh (the modelview matrix). Given a plane ax + by + cz + d = 0 and a point (X, Y, Z), we just choose a new z-coord such that

Z' = -\frac{aX + bY +d}{c}

The tricky part is knowing what a, b, c and d are. My initial thought was to back-rotate into substrate coords and project into the appropriate plane, but this requires you to invert the modelview matrix, which frankly I cannot be bothered to write code to do (and which in any case is an expensive operation), so I need to be working in viewport coordinates, not modelview coordinates. So then I thought I’d use the modelview matrix to rotate the plane normal, but it turns out that plane normals transform with the inverse of the rotation matrix, so once again we’re back to square one.

The answer is to define the plane using three points and use cross products to get the plane normal. Any three non-collinear points define a plane. These points transform using the modelview matrix, not its inverse, and the components of the normal to the plane are the coefficients we want. The algebra works out like this [cracks knuckles], [flexes fingers in the manner of a concert pianist]:

\hat{\mathbf{n}}= \frac{\mathbf{n}}{|\mathbf{n}|}

\mathbf{n} = \left(\mathbf{v}_3 \times \mathbf{v}_1\right)-\left(\mathbf{v}_2 \times \mathbf{v}_1\right)-\left(\mathbf{v}_3 \times \mathbf{v}_2\right)

because

(a - b) \times (c - b) = a \times (c-b) - b \times (c-b) = (c-b) \times b - (c-b) \times a

(a - b) \times (c - b) = c \times b - b \times b - c \times a + b \times a

(a - b) \times (c - b) = c \times b - c \times a + b \times a

and we’re away. Three cross products and no matrix inversion.
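
As I say below, the polished snippets will come later, but here’s a rough sketch of how the calculation might look – helper names are mine, not the final Seer code. It builds the normal from edge vectors, which is just the three-cross-product expression above in rearranged form:

// given three non-collinear points v1, v2, v3 in the plane (already transformed
// into viewport coords), find the z that puts the clicked (X, Y) in that plane
private static double zInPlane(double[] v1, double[] v2, double[] v3,
        double X, double Y){

    // normal n = (v3 - v2) x (v1 - v2); expanding the brackets gives the
    // three-cross-product form above -- and still no matrix inversion
    double[] n = cross(sub(v3, v2), sub(v1, v2));

    // recover d in ax + by + cz + d = 0 from any point in the plane
    double d = -(n[0]*v1[0] + n[1]*v1[1] + n[2]*v1[2]);

    // Z' = -(aX + bY + d)/c  (assumes the plane isn't edge-on, i.e. c != 0)
    return -(n[0]*X + n[1]*Y + d) / n[2];
}

private static double[] sub(double[] a, double[] b){
    return new double[]{ a[0]-b[0], a[1]-b[1], a[2]-b[2] };
}

private static double[] cross(double[] a, double[] b){
    return new double[]{ a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0] };
}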

I’ll call it a day there. I’ll post some code snippets once they’re done.

I was in a tile shop the other day and it got me thinking about the fourth dimension. What do you mean “what are you talking about?”, I would have thought the connection was obvious. Oh, all right then…

Anyway, I was in a tile shop looking at different tiling patterns. Many of them were very lovely, and it’s impressive to see how a good tile shop can embrace patterns from so many different countries and cultures. This shop had a fine line in British, Spanish, Italian, Moroccan, Arabic, Greek – you name it. Bathroom designs to satisfy even the most demanding and culturally promiscuous time-traveller.

Except after a while you start to notice that all these patterns have something in common: they are all periodic. That is to say, all the patterns repeat. In maths this is known as a translational symmetry – if you slide a repeating pattern around, it will eventually line up with the tiles a few feet over. This could be as simple as a checkerboard pattern or something much more complex, but typically a tiling that perfectly fills the available floor space (i.e. one without any gaps in it) will be repetitive.

There’s actually a whole branch of maths that investigates tilings and symmetries. It’s provable that certain shapes will fill the plane perfectly (like squares or hexagons) and others (like pentagons or octagons) won’t, and that certain combinations of shapes (like octagons and squares) can be used together to perfectly tile the plane. It’s related to the wider study of symmetry and a field called Group Theory, which turns out to be hugely important in physics, cropping up in everything from molecular chemistry and spectroscopy to particle physics. Heady stuff.

Anyway, what’s all this got to do with the 4th dimension? Well, aside from culturally promiscuous time-travel (which I’ll cover in a separate post), the fourth dimension is linked to tiling in a rather surprising way. I mentioned that tilings tend to be periodic, but does that mean that EVERY tiling of the plane has to be periodic? The question is related to a thing called the Domino Problem in maths: given a set of shapes, is it possible to design an algorithm that will decide whether they can tile the plane or not? During the 1960s, a mathematician called Hao Wang suggested that this problem was solvable if all tilings of the plane were periodic. You’d just need to decide if your set of tiles could be arranged periodically or not and you’d know the answer. Neat, huh?

Except there’s a snag. What if there are tilings of the plane that aren’t periodic? If those existed the test would fail. In 1966 a non-periodic tiling was found: it used over 20,000 different tiles (pretty hard to construct, and all the more impressive for being found before powerful computers were widespread). Then another with 104 tiles was found. Then another with 40, and another with just 13. Finally, in 1974, Roger Penrose found an aperiodic tiling that required only 2 tiles. These are pretty interesting patterns: patterns that cover the plane but don’t (quite) repeat. You can’t slide the pattern around and match it to itself, and it only contains two types of tile! It looks like this:

penrose tiling

At first it looks regular, but stare at it for a while and you can see that it doesn’t quite repeat. Also notice that this piece doesn’t form a repeating unit on its own either. It’s a pretty cool thing. What turns out to be extra interesting, though, is that tiling patterns are related to packing problems in 3D, and in particular to the arrangement of atoms in crystals. You could ask the same question here: if I want to pack a load of (spherical) atoms into a 3D space, does the pattern have to be repetitive?

Well, no. Crystals are periodic, and glasses are disordered arrangements of atoms. More organic forms like wood are the result of nonlinear growth processes, and many rocks are made up of mixtures of crystals, glasses and dusts all mixed up together – but none of these have the same properties as the Penrose tiling. There are things that do, though: they’re called quasicrystals, and they have a whole field of research attached to them.

A quasicrystal is a regular but non-periodic packing of atoms. Using techniques like scanning tunnelling microscopy you can actually make images of the individual atoms, and the surfaces turn out to look a bit like this:

quasicrystal

Look familiar? This is actually the surface of an aluminium-palladium-manganese quasicrystal. It looks an awful lot like the Penrose tiling, don’t you think?

But what about the 4th dimension? Well, it turns out that the best way to think about aperiodic tilings is to think of them as a 3D slice through a regular 4D lattice. If the slice is parallel to one of the regular planes in 4D, it’s periodic. Any non-parallel slice is aperiodic.

What’s a 3D slice? Think of a line. A line is 1D, and you can cut it with a single point. Similarly, a square is 2D and can be cut with a line, which is 1D. A cube is 3D and can be cut with a plane, which is 2D. There’s a pattern here: an nD volume can be sliced with an (n−1)D object. So a 4D object (don’t worry about picturing it!) can be sliced with a 3D volume. Mathematically this is a relatively easy thing to do, and so your 3D quasicrystal is the 3D slice through a 4D object.

This idea of slicing through a 4D volume is something that crops up in my own work. I do a bit of work with 3D graphics, and here it turns out to be convenient to think of the 3D space containing your graphics as a slice through 4D. Why is that, you ask? Well, in 3D I want to move around and also rotate around a point. A rotation in 3D is written as a 3×3 matrix, and in fact you can think of any rotation as the result of three other rotations: one about each of the x, y, and z axes. This is provable using – guess what – Group Theory.

The thing is, once your 3×3 matrix is full of rotations (and other things like scale factors and shears) there’s no room left for the translations that move things around. The most natural thing to do now is to make the matrix bigger to include extra elements for moving around, and this is equivalent to (guess what) using a fourth dimension. So, weirdly, 3D geometry turns out to be easier to think of in 4D. The 3D graphics in your favourite computer game are in fact a slice through a 4D space, just like the tilings and packings of the quasicrystals. The guy in the tile shop seemed quite interested in this when I mentioned it, although I did steer clear of projective geometry and quasicrystals at the time. It was Sunday lunchtime.
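
Here’s a toy illustration of the trick (nothing to do with tiles, and toy code rather than anything from a real renderer): embed a 3D point as (x, y, z, 1), and a single 4×4 matrix can rotate and translate it in one go.

public class Homogeneous {
    public static void main(String[] args) {
        double c = Math.cos(Math.PI/2), s = Math.sin(Math.PI/2);
        // rotate 90 degrees about z, then translate by (5, 0, 0)
        double[][] M = {
            { c, -s, 0, 5 },
            { s,  c, 0, 0 },
            { 0,  0, 1, 0 },
            { 0,  0, 0, 1 }  // the extra row that makes translation linear
        };
        double[] p = { 1, 0, 0, 1 };  // the point (1, 0, 0) with w = 1
        double[] q = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                q[i] += M[i][j] * p[j];
        // (1,0,0) -> rotated to (0,1,0) -> translated to (5,1,0)
        System.out.printf("(%.1f, %.1f, %.1f)%n", q[0], q[1], q[2]);
    }
}

The extra bottom row is what turns translation – an affine operation in 3D – into an honest linear one in 4D.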

One more thing before I go. I was talking to Mrs of-Science about this later that day, and she immediately asked if I was talking about time as the 4th dimension. I’m not – all of this is about a 4th space-like dimension, and is very different from the geometry of relativity’s 4D spacetime, which is fascinating in a completely different way and not at all like regular Euclidean space. Interestingly, though, you can add a 4th space dimension to general relativity. Kaluza and Klein did it back in the 1920s, and it turns out that if you do this you get a unification of gravity and classical electromagnetism, which was the start of all the work on grand unification of the four fundamental interactions that we hear so much about. So the fourth dimension reveals secrets here too.

Life in 4D is pretty cool, and I fully expect my bathroom to look pretty good too.