Mass. The stuff of stuff. Tough to move about, painful if you run into it, and a tendency for big things to pull together. Recently I’ve been pondering mass, and given the stunning work on the Higgs boson that’s going on at the moment, I thought I might share some thoughts.
So: mass. It seems a pretty intuitive concept. Push (or pull) something, and the heavier it is the more it resists the force you apply. This is encapsulated in physics in Newton’s second law (F = ma) and is fundamental to classical mechanics. It’s so intuitive and well-established in physics that it’s easy to forget that although Newton’s laws are astoundingly accurate, useful, and powerful, they don’t say anything about what mass actually is.
One thing it’s definitely not is easy to explain or well-understood. For one thing, there are two concepts of mass in physics. There’s the mass I just mentioned, inertial mass, which is an object’s resistance to changes in its motion, but there is also gravitational mass, which is the property that causes massive objects to attract each other – this is the stuff of both Newton’s law of gravitation and Einstein’s general relativity. If you think about it, these are in fact quite different things: one is the response of an object to being shoved around, the other is its tendency to attract other masses. In strict terms these are two completely different interactions: if I push something I’m interacting with it via the electromagnetic force (yes, I am: electro-chemical processes in my brain and muscles move my hands (or whatever) to an object and push against it. On the smallest scale this is mediated by the electrons in the molecules of my skin cells repelling the electrons in the atoms at the surface of whatever I’m pushing), whereas gravity is usually thought to be a separate fundamental interaction. General relativity models it as the object curving the space it sits in. Two completely different forces acting in completely different ways.
Looked at from this perspective, it almost seems to be a coincidence that both of these concepts have the same name – why call them both mass? Aren’t we conflating two different things? So here’s something worth pondering: if we measure the inertial mass of an object and then measure the gravitational mass of the same object they are exactly the same. And that, if you think about it, is pretty odd. There have been some astonishingly accurate comparisons of inertial and gravitational mass over the years, and none of them have ever detected any measurable difference between them.
So, enter Einstein. The usual explanation of the equality of inertial and gravitational mass is the principle of equivalence. Einstein realised that inertial and gravitational mass are connected via acceleration. Imagine you’re in space, floating, weightless. You’re not experiencing any forces, including gravity. Now imagine you’re in a lift in a very, very tall building. When the lift is stationary you feel gravity. You also feel other little forces when the lift speeds up or slows down – a little heavier when the lift starts to move upwards, a little lighter when the lift slows before stopping at a floor. Now imagine that the lift cable snaps – you fall, until you hit the ground. Being in the lift you don’t feel the air rushing around you, and there are no windows so you can’t see the walls zooming past or the floor shooting upwards, encompassing your imminent doom. This means you don’t feel any forces – you, the floor, the walls, and the air around you are all being accelerated in the same way towards the ground, so while you are falling you are weightless, just like you were in space. Now imagine that the building is so tall that you never hit the ground – you can’t tell the difference between free-fall and the absence of gravity. This is the principle of equivalence – an “inertial frame” (the one in space) is identical to one in free-fall. In general relativity, the free-falling one is thought of as moving along a line in a curved spacetime. These two things are equivalent.
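The reason everything in the falling lift stays together can be put in one line. Sticking with Newton rather than general relativity (a back-of-envelope sketch, not a derivation):

```latex
% Inertial mass resists acceleration; gravitational mass feels gravity.
m_i \, a = \frac{G \, m_g \, M}{r^2}
\quad\Longrightarrow\quad
a = \left(\frac{m_g}{m_i}\right)\frac{G M}{r^2}
```

If the two masses are equal, the ratio is 1 and the acceleration is the same for every object – you, the lift, and the air all fall identically, which is exactly why free-fall feels like floating.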
So there you have it: inertial and gravitational masses are the same because of the principle of equivalence. Except… well, the clue’s in the name, really – the principle of equivalence is just that, a principle. It works, but it doesn’t tell you WHY these things are equivalent, just that the theory works out if they are. It puts me in mind of a quote about wave-particle duality in quantum mechanics: they are completely different things which we think of as being the same. It is as if we saw a rabbit sitting in a tree. This would be pretty unusual, but is completely explained if we just think of the rabbit as being a cat, in which case we’d understand its behaviour quite well. The two masses are equivalent because the theory tells them to be. This is clearly not the end of the story.
But none of this is anything too remarkable – mass and mass, gravity and acceleration — all pretty well established. I congratulate you for persevering this far. The really interesting stuff comes when you start thinking about quantum mechanics and another famous Einsteinian concept – the equivalence of energy and mass, E = mc².
Mass is a long-standing problem in the quantum world. On the one hand there are ongoing efforts to unify the four forces and construct a quantum description of gravity. I’m no expert here, but given that this has been THE problem in theoretical physics for the better part of 40 years and we’re still very far from testable experimental predictions, it’s safe to say that this is hard. We’ve got string theory, we’ve got loop quantum gravity, we’ve got extra dimensions and whorls in spacetime, and unfortunately we’ve got serious difficulties with divergences and suggestions based around the compactification of multiple higher dimensions. Heady stuff.
Then there’s inertial mass in the quantum world. This is the stuff that’s been making headlines of late with the likely discovery of the Higgs boson. The Higgs mechanism is a hypothetical answer to what mass actually is. The idea is that empty space is filled with a thing called the Higgs field. This is like a sticky soup of virtual particles which resists changes in the motion of particles moving through it. A good analogy for this is a ping-pong ball on a string in a bucket of water (no, really). Forget for a moment that ping-pong balls float: imagine that the ball is in the water and you pull it with the string. The water resists the motion of the ball and makes it feel heavier. It would be worse if it were in treacle. This is what the Higgs field does – it resists changes to the ball’s motion and causes an effect that’s a lot like mass.
The Higgs boson is the particle that mediates the interaction between the Higgs field and particles placed in it, and you might be tempted to think that it’s got inertial mass nailed, but you’d be wrong. The Higgs field explains the mass of elementary particles, like electrons and quarks. As we all know, protons and neutrons are made of three quarks, so you’d think that their mass would be about three times the mass of a quark. Quantum mechanics being what it is, this isn’t the case. Protons and neutrons are actually around a hundred times more massive than the sum of their constituent quarks’ masses. The remainder is made up from the binding energy of the quarks (the energy of the bonds connecting them together). There’s a lot of energy in there, and its contribution to the mass can be calculated via none other than E = mc², which gives a very accurate estimate of the mass of protons and neutrons.
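You can check this with nothing more than E = mc² and a little arithmetic. A quick sketch (the quark masses below are approximate current-quark values, which is an assumption – the exact numbers depend on how you define a quark’s mass):

```python
# How much of a proton's mass comes from its quarks' rest mass?
# Masses in MeV/c^2 (natural units, so E = mc^2 lets us compare directly).
# These are approximate current-quark masses -- an assumption for illustration.
m_up = 2.2        # up quark, MeV/c^2
m_down = 4.7      # down quark, MeV/c^2
m_proton = 938.3  # proton, MeV/c^2

quark_rest_mass = 2 * m_up + m_down   # proton = uud
fraction = quark_rest_mass / m_proton

print(f"quark rest mass: {quark_rest_mass:.1f} MeV/c^2")
print(f"fraction of proton mass: {fraction:.1%}")
# The other ~99% is binding energy of the strong interaction.
```

With these numbers the quarks account for only about 1% of the proton’s mass – the rest is energy masquerading as mass.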
There’s just one snag here: this mass isn’t coupled to the Higgs field. The Higgs interaction couples to the electroweak force, but quarks are bound via the strong force. E = mc² tells us the amount of mass in there, but not the reason why this energy causes resistance to changes in motion. We also don’t know the relationship between Higgs mass and gravitational mass, or how all this might relate to the principle of equivalence. Put simply, we don’t know what 98% of the mass in atoms is or how it connects to gravitation. Hence my suggestion that we don’t really understand mass.
And this is also borne out by a current crisis in theoretical physics. It’s easy enough to state: what would you see, and what would happen to you, if you fell into a black hole (and specifically, what happens when you cross the event horizon)? Until recently, the answer from most physicists would have been that you wouldn’t really notice – you’d be passing through empty space, and the principle of equivalence tells us that this would feel like… floating in space. After a long time you’d notice that your feet were falling faster than your head (assuming you were falling feet first) and after a very long time the difference in force would eventually tear you apart. Ow. Nasty way to go.
More recently, though, cracks have emerged in this story. They have to do with the nature of the vacuum — classically, there’s nothing there, but in the quantum world it’s a writhing, foaming sea of pairs of virtual particles spontaneously coming into being and then annihilating (this is a consequence of the uncertainty principle: a completely empty vacuum would have a precisely known energy — zero — forever, which is forbidden).
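The loophole the vacuum exploits is the energy–time uncertainty relation. A rough back-of-envelope, not a rigorous derivation:

```latex
\Delta E \, \Delta t \gtrsim \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta t \sim \frac{\hbar}{2\,\Delta E}
```

A virtual electron–positron pair, for instance, borrows ΔE ≈ 2 × 0.511 MeV, so it can exist for only around 3 × 10⁻²² seconds before it has to annihilate and pay the energy back.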
At the event horizon of a black hole, one of the pair can fall into the black hole while the other zips off to infinity. This is Hawking radiation: the fleeing particles steal a bit of energy from the black hole and cause it to shrink slightly. Eventually this causes an isolated black hole to evaporate completely.
The problem is that it doesn’t stop there. The information about the particle that falls into the black hole is destroyed when it falls in — there’s no way of discovering what the particle was by looking at the radiation coming from the black hole itself. This is a problem because a fundamental principle of quantum field theory is that information is never lost — you can always recover it. A potential way out is to imagine that the pair of particles are entangled, so that the one that zips off to infinity tells us about the one that falls in, but this leads to a lot of energy being released when the pair separate.
And here comes the punchline: general relativity says you wouldn’t feel much when you fall into a black hole; quantum mechanics says you would meet a wall of fire at the event horizon. Either the principle of equivalence is wrong, or quantum mechanics’ insistence that information is never lost is wrong. The two pillars of modern physics are in contradiction: at least one is going to fall, and it’s all linked to the nature of mass and the nature of the vacuum.
So what are people doing to get around this? A surprising amount, it appears.
There are a couple of different approaches, all of which seem to revolve around studying the quantum vacuum itself. Firstly, there are several theories which suggest that the reason why it’s so difficult to combine quantum field theory with general relativity is that gravity isn’t actually a fundamental force at all – it’s an emergent effect from the interaction of particles with the quantum vacuum. General relativity then emerges as an “effective theory” at larger scales. This sort of emergence has been observed in systems like crystals and superconductors, and might explain the origin of gravitational mass.
On the other hand, there’s an approach that models the quantum vacuum using classical physics (which is apparently quite good at reproducing blackbody curves) and treats inertial mass as another interaction between particles and the vacuum. This time, resistance to changes in motion is generated by exchanging energy with this classical model of the quantum vacuum.
What’s interesting about these is how similar the approaches are: particles interacting with some model of the vacuum. They also both suggest that if we could manipulate the vacuum we could also manipulate mass, which would be very exciting indeed.
There’s also the ongoing problem with Dark Matter and Dark Energy – we don’t know what they are (especially Dark Energy) but we need them to make our models work. Wildly speculating, it’s possible that this is linked to the black hole conundrum.
For an outsider like me, though, I can’t help but think that all this is very reminiscent of the situation in physics at the end of the 19th century. We have a problem that appears to destroy all our well-established physics and that we cannot solve. The way out of that one was quantum mechanics, which has done pretty well for itself and led to new technologies that would have seemed like magic beforehand.
The way out of this will undoubtedly be exciting and revolutionary. Can’t wait to see what people come up with!
This week I’m going to talk about a little project I’ve been working on using the Raspberry Pi. For those unfortunate souls who haven’t heard of a Raspberry Pi, it’s a stripped-down, ARM-powered Linux box that costs about £25. Well, “box” is actually the wrong word seeing as it comes as a naked circuit board, but it’s powerful enough to run a complete Linux system. The default system is a version of Debian Wheezy, called Raspbian.
These little things are pretty impressive — half a gig of RAM, an ARM processor, a GPU. Connectivity is also very good: two USB 2.0 ports, ethernet, HDMI, and a load of tempting-looking little pins that someone more experienced with a soldering iron than I am could undoubtedly do wonders with. Add an SD card to act as a hard drive and you’re away.
I unwrapped one on Christmas morning, and the obvious question was “How can I really put this thing through its paces?”. As I mentioned in a post elsewhere, the obvious thing for me to try was to port over Camino, the diffusion MRI toolkit I work on, and try running some analyses. The results were pretty impressive, so this week I’m going to post instructions for what I did. This is a companion piece to my post on the Raspberry Pi website.
I’d estimate that to go from nothing to the first set of images will take you about 40 mins to 1 hour, depending on your confidence with the platform. The final image will take the Raspi about 2 hours to produce, but it’ll just work away by itself so you could go off and not worry about it. Also, all the commands in this post run from a terminal window, and to get the images actually on screen you should have a shell terminal open on the Raspbian desktop.
My set-up is pretty vanilla. I’ve installed Raspbian Wheezy using the SD-card image from the Raspberry Pi website. That’s the standard version, not the “soft float” version (this is important for numerical efficiency).
I burned this to a high-speed, 16 gig SD card – a Samsung Class 10 MB-SPABAEU. I went for a high-speed card because there’s a lot of disk access in what I’m doing and I thought this might help. Other cards should work just as well.
I allowed the image to automatically resize the partition on the SD card, and left the memory split between CPU and GPU at the default value.
The next ingredient is the Java Runtime Environment (JRE). For the uninitiated, this is the box of tricks that allows the Raspi to run Java. Crucially, don’t use the one from RaspberryPi.org or the default download for Raspberry Pi from Oracle. These do floating-point arithmetic in software and require the soft-float Raspbian OS, which will slow things down too much to make any of this practical.
The right JRE to use is the Java 8 for ARM Developer Preview. This implements hardware floating point and has the added advantage of being bang up-to-date (as of Feb 2013!). Accept the license agreement (assuming you don’t disagree with it, naturally) and download the zip archive. You can do this from the Raspbian desktop using Midori.
Once you’ve downloaded the zip archive, there are installation instructions here. Don’t worry — it isn’t difficult to do and only takes a couple of minutes. Make sure you change the path variables as they suggest; this makes things much easier further down the line.
Now we’re getting to the more interesting bits. In order to run diffusion image analysis, we need some software that knows how to do it. As a totally unbiased observer, I recommend the software written by my colleagues and me: Camino. Camino is open source, and available for free. It lives here, and the download section is here.
The two download options differ only in the compression used – tar.gz or tar.bz2. You can install the tools to unpack either on your Raspi with
sudo apt-get install gzip
sudo apt-get install bzip2
I recommend making a folder to put Camino in and moving the Camino archive into it. From here on I’ll refer to this folder as the Camino root directory.
Installation instructions for Camino are here. You can safely ignore the first step about Java heap size; it’s not important here. Follow the Linux/Unix instructions in section 2.
Unpacking the archive should be fairly quick; building the code (the compile step in the instructions) will take a couple of minutes, so don’t worry if it doesn’t happen immediately. You’ll get a sequence of messages saying that different commands are being built (about 30 of them). Once it’s finished, you’ll get your command prompt back.
Again, do make the recommended changes to your $PATH variable – without this, things can get annoying.
One small word of warning: Camino is research software and so can be tricky to use. It doesn’t have a point-and-click graphical interface, but instead works with commands that you type into a shell window. I’ll guide you through them.
It also goes without saying that all of this is for demonstration purposes only.
The good bit
If you’ve got this far, you’re ready to go! To generate some images you’ll need two things: some data, and the right commands. Both of these can be found in the tutorials section of the Camino website. Specifically, the DTI tutorial.
The DTI tutorial is a detailed tutorial aimed at researchers, and running through the whole thing is not for the faint-hearted! Also, some of the techniques are a bit much for the Raspi, so instead I’ll post a sequence of commands here for you to try.
First, download the example data in section 2, move the archive to the Camino root directory, and unpack it.
Now follow the instructions in section 3 about making a schemefile. This is a file that tells Camino about the scan sequence used to acquire the data – it’s a necessary step, but you don’t have to understand what’s in it. Camino understands it!
Next, run the data conversion step:
image2voxel -4dimage 4Ddwi_b1000.nii.gz -outputfile dwi.Bfloat
And (finally!) we can run the analysis. First, we fit a diffusion tensor:
modelfit -inputfile dwi.Bfloat -schemefile 4Ddwi_b1000_bvector.scheme
-model ldt -bgmask brain_mask.nii.gz -outputfile dt.Bdouble
The Raspi will sit there for a couple of minutes after this – it’s a big calculation for a small machine. Once your command prompt reappears, you can use what you’ve just made to make an FA map (that’s the measure of directedness)
cat dt.Bdouble | fa | voxel2image -outputroot fa -header 4Ddwi_b1000.nii.gz
and also get the tissue directions:
cat dt.Bdouble | dteig > dteig.Bdouble
The cat command just sends the data file to the command (it’s short for conCATenate). No felines involved!
And now we’ve got enough to make our first image! Camino’s image viewer is called pdview, and we can use it to display the images we’ve just made.
pdview -inputfile dteig.Bdouble -scalarfile fa.nii.gz
This will run a little slowly, but it’ll get there! With a little patience and understanding you should see a colour FA map with directions. You should be able to use pdview to move around in the data (it’s 3D; changing the slice number will move up and down through the brain), but you’ll have to be a little patient. You can switch the angle by clicking the axial/sagittal/coronal buttons at the top. You can also switch the direction lines on and off by toggling the “show vectors” box in the top left corner.
I used this program to make most of the images in the blog post, grabbing desktop images using scrot.
(Stifling a giggle here? Shame on you! If you need to know more about this command, try “man scrot”…)
More advanced imaging
So, the next bit I tried was Q-ball imaging. This is a more advanced technique and requires more steps, but the Raspi is well up to the challenge, so it’s definitely worth a go.
First we do a bit of pre-processing to tell Camino how complex each voxel is:
voxelclassify -inputfile dwi.Bfloat -bgthresh 200
-schemefile 4Ddwi_b1000_bvector.scheme -order 4 > dwi_VC.Bdouble
voxelclassify -inputfile dwi.Bfloat -bgthresh 200
-schemefile 4Ddwi_b1000_bvector.scheme -order 4
-ftest 1.0E-09 1.0E-01 1.0E-03 > dwi_VC.Bint
Now we generate a Q-ball analysis matrix
qballmx -schemefile 4Ddwi_b1000_bvector.scheme > qballMatrix.Bdouble
and then run the Q-Ball analysis. This will work away for about 15 minutes, so it might be time for a cup of tea.
linrecon dwi.Bfloat 4Ddwi_b1000_bvector.scheme qballMatrix.Bdouble
-normalize -bgmask brain_mask.nii.gz > dwi_ODFs.Bdouble
Now we’re ready for the final step: creating the Q-Ball image. First, we need an FA map in a slightly different format than we currently have. We can generate that using
fa < dt.Bdouble > fa.img
Now we need to split the image into slices. We do this with the unix split command:
split -b $((112*112*(246+2)*8)) dwi_ODFs.Bdouble splitBrain/dwi_ODFs_slice
split -b $((112*112*8)) fa.img splitBrain/fa_slice
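Those shell arithmetic expressions are just computing how many bytes one slice of each file occupies. A quick sketch of the same sums (my reading of the numbers: 112 × 112 voxels per slice, 246 ODF samples plus what I take to be 2 bookkeeping values per voxel, stored as 8-byte doubles – the interpretation of the “+2” is an assumption):

```python
# Reproduce the byte counts passed to `split` above.
x, y = 112, 112          # voxels per slice in each direction
odf_samples = 246        # ODF samples per voxel
extras = 2               # extra values per voxel (assumed bookkeeping)
bytes_per_double = 8     # Camino's Bdouble format uses 8-byte doubles

odf_slice_bytes = x * y * (odf_samples + extras) * bytes_per_double
fa_slice_bytes = x * y * bytes_per_double  # FA map: one double per voxel

print(odf_slice_bytes)  # bytes per ODF slice
print(fa_slice_bytes)   # bytes per FA slice
```

Splitting on exact slice boundaries like this is what lets each output chunk correspond to one 2D brain slice.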
The final step is to use another of Camino’s image generators to build the image. This is the really lengthy step – it will take over two hours, but trust me, it’ll work!
sfplot -inputmodel rbf -rbfpointset 246 -rbfsigma 0.2618
-xsize 112 -ysize 112 -minifigsize 20 20 -minifigseparation 2 2
-minmaxnorm -dircolcode -projection 1 -2
-backdrop splitBrain/fa_slicear < splitBrain/dwi_ODFs_slicear
When this finally finishes, you can view it using a program like ImageMagick. This isn’t installed by default, but it’s a great program and I highly recommend it. You can install it in the usual way
sudo apt-get install imagemagick
(Notice there’s a ‘k’ on the end of ‘imagemagick’. The apt-get will fail if you mis-spell it – no pun intended…)
To display the image you’ve just made you’ll need to tell ImageMagick what size the image is. This should have been printed out by the previous command and you can cut and paste it as-is into your display command. If that size is, say, 2586×2586, then the command to display the image is:
display -size 2586x2586 dwi_ODFs_slicear.rgb
And there you have it! Complete instructions to reproduce what I did in my Raspberry Pi blog post. Hope you enjoyed it, feel free to get in touch if you want to know more, or let me know how you got on.
See you again!
I was in a tile shop the other day and it got me thinking about the fourth dimension. What do you mean “what are you talking about?”, I would have thought the connection was obvious. Oh, all right then…
Anyway, I was in a tile shop looking at different tiling patterns. Many of them were very lovely, and it’s impressive to see how a good tile shop can embrace patterns from so many different countries and cultures. This shop had a fine line in British, Spanish, Italian, Moroccan, Arabic, Greek, you name it. Bathroom designs to satisfy even the most demanding and culturally promiscuous time-traveler.
Except after a while you start to notice that all these patterns have something in common: they are all periodic. That is to say, all the patterns repeat. In maths this is known as a translational symmetry – if you slide a repeating pattern around it will eventually line up with the tiles a few feet over. This could be as simple as a checkerboard pattern or something much more complex, but typically a tiling that perfectly fills the floor space available (i.e. doesn’t have any gaps in it) will be repetitive.
There’s actually a whole branch of maths that investigates tilings and symmetries. It’s provable that certain shapes will fill the plane perfectly (like squares or hexagons) and others (like regular pentagons or octagons) won’t, and that certain combinations of shapes (like octagons and squares) can be used together to perfectly tile the plane. It’s related to the wider study of symmetry and a field called Group Theory, which turns out to be hugely important in physics, cropping up in everything from molecular chemistry and spectroscopy to particle physics. Heady stuff.
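The single-shape part of this is easy enough to check yourself: around any corner of a tiling, a whole number of interior angles must sum to 360°. A small sketch (for one shape on its own – the mixed-shape cases like octagons-plus-squares need a slightly longer argument):

```python
# Which regular polygons can tile the plane on their own?
# At each vertex a whole number of copies of the interior angle
# must fit exactly into 360 degrees.
def interior_angle(n):
    """Interior angle of a regular n-gon, in degrees."""
    return 180.0 * (n - 2) / n

for n in range(3, 9):
    angle = interior_angle(n)
    tiles = (360.0 / angle).is_integer()
    print(f"{n}-gon: interior angle {angle:.1f} degrees, tiles alone: {tiles}")
```

Triangles, squares, and hexagons pass; pentagons fail because 108° doesn’t divide 360° – which is exactly why the tile shop had no regular-pentagon floors.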
Anyway, what’s all this got to do with the 4th dimension? Well, aside from culturally promiscuous time-travel (which I’ll cover in a separate post). The fourth dimension is linked to tiling in a rather surprising way. I mentioned that tilings tend to be periodic, but does that mean that EVERY tiling of the plane has to be periodic? The idea is related to a thing called the Domino problem in maths. Specifically: given a set of shapes, is it possible to design an algorithm that will decide whether they can tile the plane or not? During the 1960s, a mathematician called Hao Wang suggested that this problem was solvable if all tilings of the plane were periodic. You’d just need to decide if your set of tiles could be arranged periodically or not and you’d know the answer. Neat, huh?
Except there’s a snag. What if there are tilings of the plane that aren’t periodic? If those existed the test would fail. In 1966 a non-periodic tiling was found: it used over 20,000 different tiles (pretty hard to construct, and all the more impressive for being found before powerful computers were widespread). Then another with 104 tiles was found. Then another with 40, and another with just 13. Finally, in 1974 Roger Penrose found an aperiodic tiling that required only 2 tiles. These are pretty interesting patterns: patterns that cover the plane but don’t (quite) repeat. You can’t slide the pattern around and match it to itself, and it only contains two types of tile! It looks like this:
At first it looks regular, but stare at it for a while and you can see that it doesn’t quite repeat. Also notice that this piece doesn’t form a repeating unit on its own either. It’s a pretty cool thing. What turns out to be extra interesting, though, is that tiling patterns are related to packing problems in 3D, and in particular to the arrangement of atoms in crystals. You could ask the same question here: if I want to pack a load of (spherical) atoms into a 3D space, does the pattern have to be repetitive?
Well, no. Crystals are periodic; glasses are disordered arrangements of atoms. More organic forms like wood are the result of nonlinear growth processes, and many rocks are made up of mixtures of crystals, glasses and dusts all mixed up together, but none of these have the same properties as the Penrose tiling. There are things that do, though: they’re called quasicrystals, and they have a whole field of research attached to them.
A quasicrystal is a regular but non-periodic packing of atoms. Using atomic force microscopy you can actually make images of the individual atoms, and the surfaces turn out to look a bit like this:
But what about the 4th dimension? Well, it turns out that the best way to think about aperiodic tilings is to think of them as a 3D slice through a regular 4D lattice. If the slice is parallel to one of the regular planes in 4D, it’s periodic. Any non-parallel slice is aperiodic.
What’s a 3D slice? Think of a line. A line is 1D, and you can cut it with a single point. Similarly, a square is 2D and can be cut with a line, which is 1D. A cube is 3D and can be cut with a plane, which is 2D. There’s a pattern here: an nD volume can be sliced with an (n-1)D object. So a 4D object (don’t worry about picturing it!) can be sliced with a 3D volume. Mathematically this is a relatively easy thing to do, and so your 3D quasicrystal is the 3D slice through a 4D object.
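You can see this slicing trick in miniature one dimension down: slice the ordinary 2D grid of integer points along a line of irrational slope, and the crossings you hit form an aperiodic 1D “tiling” of long and short tiles – the famous Fibonacci word. A sketch (an illustrative analogue, not a quasicrystal simulation; the Beatty-sequence formula is a standard way of writing this slice):

```python
import math

# Slice the 2D integer lattice along a line with slope related to the
# golden ratio. The gaps between consecutive lattice crossings come out
# as either 2 (a long "L" tile) or 1 (a short "S" tile), and the
# resulting sequence never repeats -- a 1D quasicrystal.
phi = (1 + math.sqrt(5)) / 2

def fibonacci_word(length):
    gaps = [math.floor((n + 1) * phi) - math.floor(n * phi)
            for n in range(1, length + 1)]
    return "".join("L" if g == 2 else "S" for g in gaps)

word = fibonacci_word(30)
print(word)

# No small period fits: the word never lines up with a shifted copy.
assert all(word[:-p] != word[p:] for p in range(1, 10))
```

Tilt the slice to a rational slope and the pattern of Ls and Ss becomes periodic – exactly the parallel-versus-non-parallel distinction above.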
This idea of slicing through a 4D volume is something that crops up in my own work. I do a bit of work with 3D graphics. Here it turns out to be convenient to think of the 3D space containing your graphics as a slice through 4D. Why is that, you ask? Well, in 3D I want to move around and also rotate around a point. A rotation in 3D is written as a 3×3 matrix. In fact you can think of any rotation as the result of three other rotations: one about each of the x, y, and z axes. This is provable using – guess what – Group Theory.
The thing is, once your 3×3 matrix is full of rotations (and other things like scale factors and shears) there’s no room for the translations that move things around. The most natural thing to do now is to make the matrix bigger to include extra elements for moving around, and this is equivalent to (guess what) using a fourth dimension. So, weirdly, 3D geometry turns out to be easier to think of in 4D. The 3D graphics in your favourite computer game are in fact a slice through a 4D space, just like the tilings and packings for the quasicrystals. The guy in the tile shop seemed quite interested in this when I mentioned it, although I did steer clear of projective geometry and quasicrystals at the time. It was Sunday lunchtime.
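The trick described above is the standard homogeneous-coordinates construction. A minimal pure-Python sketch (real graphics code would use a maths library, but this keeps it self-contained):

```python
import math

# A 4x4 matrix holds a 3x3 rotation in its upper-left block and a
# translation in its last column, so one matrix both rotates and moves.
def rotz(theta):
    """Rotation about the z axis, embedded in a 4x4 matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def translate(tx, ty, tz):
    """Translation as a 4x4 matrix -- impossible as a 3x3."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, point3):
    x, y, z = point3
    v = [x, y, z, 1]  # lift the 3D point into the 4D slice w = 1
    out = [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]
    return out[:3]

# Rotate 90 degrees about z, then move up 5 units -- one matrix does both.
m = matmul(translate(0, 0, 5), rotz(math.pi / 2))
print(apply(m, (1, 0, 0)))
```

The point rides along at w = 1, which is exactly the “slice through 4D” the post is talking about: the visible 3D world is the w = 1 slice of the 4D space the matrices act on.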
One more thing before I go. I was talking to Mrs of-Science about this later on that day, and she immediately asked if I was talking about time as the 4th dimension. I’m not – all of this is about a 4th space-like dimension, and is very different from the geometry of relativity’s 4D spacetime, which is fascinating in a completely different way and not at all like regular Euclidean space. Interestingly, though, you can add a 4th space dimension to general relativity. Kaluza and Klein did it back in the 1920s. It turns out that if you do this you get a unification of gravity and classical electromagnetism, which was the start of all the work on grand unification of the four fundamental interactions that we hear so much about. So the fourth dimension reveals secrets here too.
Life in 4D is pretty cool, and I fully expect my bathroom to look pretty good too.
This week there’s been a lot of activity in the group around public engagement. Firstly a deadline looms for contributions to the upcoming open day at the Centre for Medical Image Computing. Everyone at the centre is contributing to a large exhibit where interested non-specialists can find out what it is we do. I’ve been working on a poster with one of our PhD students on the simulation work I do and how we apply it to biomedical imaging. I’ve also been putting together demonstrations and software for our group’s display. That’s a bit more ongoing.
Also this week I’ve been working with a colleague at CABI on a proposal for an exhibit at a public engagement event being held by the Wellcome Trust. This one is for a stall at a scientific street fair at the Barbican next spring. We’ve come up with a few ways to illustrate our ideas using edible brains (well, not actual brains, but edible things in the shape of brains) and presenting some of our work around non-invasive in vivo microscopy and disease progression. Fingers crossed on that one.
All of this is quite exciting and a nice break from the norm. Another memorable event this week was a seminar on using Bayesian inference and random graphs to estimate brain connectivity – so much of our work involves having highly technical conversations with other specialists.
“Are you sure that it’s appropriate to assume that because the adjacency matrix element is zero, the inverse covariance matrix element is also zero?”
Well, actually no now you come to mention it but… oh, one of the statisticians in the room is saying that the assumption’s safe, phew. That was a close one.
Instead, we’re trying to explain as clearly as possible why our research is interesting and worthwhile in a way that’s engaging and accessible without being patronising.
“What we’re trying to do is look at the microscopic structure of your brain without having to cut you open. This would eliminate the need to stick needles through your skull to check if there’s anything wrong.”
Come to think of it, that might be a bit too direct.
Public engagement is undoubtedly important. My research is publicly funded, and so telling the public what I’m up to is the least I can do, frankly. This is also linked to things like Open Science, whereby you should be able to download my code, repeat my experiments, and see that I’m not making it up, and Open Access, whereby you should be able to read what I publish in peer-reviewed journals for free.
Science should be open and free and accessible and available. Getting knowledge out there enables people to use it in new ways. Making research interesting will (hopefully) excite the next generation of scientists and developing communication skills with interested non-specialists at the very least makes for interesting dinner conversations.
I’m all for it. Drop by if you’re in the area.