You can see light’s presence everywhere, but have you ever seen it moving? Light is the fastest thing we currently know of, which makes it a rather elusive beast to see in motion, especially at the scale we exist on; it might look instantaneous, but it does have a finite speed. We’ve done many experiments in slowing light down, and even in trapping it for short periods of time, but actually watching a light ray propagate was out of our reach for quite some time, that is until the recent development of a couple of technologies.
The above video is the work of Ramesh Raskar and his team at MIT, who produced a camera capable of capturing 1 trillion frames per second. It’s not a camera in the traditional sense, though, as the way it captures images is quite unique. Most cameras these days are CCD based: they capture an image of the whole scene, then read it off line by line and store it for later viewing. The MIT system instead uses a streak camera, which can only capture a line a single pixel high, essentially producing a one dimensional image. The trick is that they’re photographing a static, repeatable scene multiple times over, repositioning the capture area each time in order to build up a full image of it. As you can imagine this takes a considerable amount of time, and whilst there are some incredible images and movies created as a result, the conditions and equipment required to do so aren’t exactly commodity.
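The scanning scheme is easy to sketch in code. Everything below is hypothetical (the function names, the time-bin count, the faked streak data are mine, not MIT’s), but it shows the basic idea: repeated one-row captures of a repeatable scene get slotted into a stack of full 2D frames, one frame per time bin.

```python
import numpy as np

def capture_streak_row(scene, row, n_time_bins):
    """Hypothetical stand-in for one streak-camera exposure: records a
    single 1-pixel-high row of the scene, resolved into n_time_bins
    time samples (the 'streak'). A real streak camera measures photon
    arrival times; here we just repeat the row for each time bin."""
    return np.tile(scene[row], (n_time_bins, 1))

def build_movie(scene, n_time_bins=100):
    """Scan the 1D capture line over every row of a repeatable scene
    and reassemble full 2D frames, one per time bin."""
    height, width = scene.shape
    movie = np.zeros((n_time_bins, height, width))
    for row in range(height):                      # reposition the capture area
        streak = capture_streak_row(scene, row, n_time_bins)
        movie[:, row, :] = streak                  # same row of every frame
    return movie

scene = np.random.rand(64, 64)
movie = build_movie(scene)
print(movie.shape)  # (100, 64, 64): 100 frames of a 64x64 scene
```

The cost is visible in the loop: a 64-row image needs 64 separate exposures of the same repeatable event, which is why building a full movie this way takes so long.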
There are alternatives, however, as some intrepid hackers have demonstrated.
Instead of the extremely expensive streak camera and titanium-sapphire laser, their system utilizes a time-of-flight camera coupled with a modulated light source. From reading their SIGGRAPH submission it appears that their system captures an image of the whole scene, so to create the light-in-flight movies they vary the delay between when the light source fires and when the camera takes the picture. This allows them to capture a movie much more quickly than MIT’s solution, and with hardware that’s a fraction of the cost. The resolution of the system appears to be lower (i.e. I can’t make out light wave propagation like you can in the MIT video), but for a solution that’s less than 1% of the cost I can’t fault them.
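As I understand it, the key knob is that offset between the light pulse and the exposure: each delay setting captures light that has travelled a correspondingly longer distance. A tiny back-of-the-envelope sketch (the step size and delay values are made up for illustration, not taken from their paper):

```python
# Each frame is captured with a different offset between when the
# light source fires and when the camera exposes; the light seen in
# that frame has travelled roughly delay * c.

C = 3e8  # speed of light, m/s

def frame_distance(delay_ns):
    """Distance light has covered when the camera opens, for a given
    source-to-shutter delay in nanoseconds."""
    return C * delay_ns * 1e-9

# Sweeping the delay in 0.5 ns steps advances the captured wavefront
# by about 15 cm per frame.
for d in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(f"delay {d:.1f} ns -> light travelled {frame_distance(d):.3f} m")
```

The numbers make clear why the timing hardware matters so much: resolving light over tabletop distances means controlling delays at sub-nanosecond precision.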
Their paper also states that they’re being somewhat cautious with their hardware, currently running it at only 1% of its duty cycle. The reason is a lack of active cooling on their key components; they didn’t want to stress them too much. With the addition of some active cooling, which could be done very cheaply, they believe they could significantly ramp up the duty cycle, dropping the capture time down to a couple of seconds. That’s really impressive, and I’m sure there are even more optimizations that could be made to improve other aspects of their system.
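The duty-cycle trade-off is simple scaling, which a few lines make concrete. The figures here are illustrative only (the paper’s 1% duty cycle is real, but the 2-second active-capture time is my assumption):

```python
def wall_clock_time(active_seconds, duty_cycle):
    """Wall-clock time needed to accumulate a given amount of active
    capture when the hardware only runs duty_cycle (0 < dc <= 1) of
    the time."""
    return active_seconds / duty_cycle

# Hypothetical: 2 s of active capture at 1% duty takes ~200 s of
# wall-clock time; at full duty it drops back to ~2 s.
print(wall_clock_time(2.0, 0.01))  # 200.0
print(wall_clock_time(2.0, 1.0))   # 2.0
```

So simply adding cooling and raising the duty cycle buys a 100x speed-up with no change to the imaging approach at all.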
It’s one thing to see a major scientific breakthrough come from a major research lab, but it’s incredible to see the same experiments reproduced by others for a fraction of the cost. Whilst this won’t lead to anything for the general public anytime soon, it does open up paths for some really intriguing research, especially when the cost can be brought down to such a low level. It’s things like this that keep me so interested and excited about all the research being done around the world and what the future holds for us all.