
My paper on image correction comes out in Nature Methods in June¹. Sorry for the paywall; this is an unfortunate consequence of working at a major research university. I'm not allowed to post pre-print versions of the paper until 6 months after publication, but I will definitely do so as soon as I can! [Update: The paper is now freely available.]

About a year ago, one of our microscopes suddenly had a filter that got out of whack (a technical term) and added a huge intensity gradient to our images. We had been using the rolling ball algorithm (as implemented by ImageJ) to correct milder forms of such gradients, but I noticed that this method was not sufficient to correct the steep gradient. It seemed to me, though, that it shouldn't matter how steep the gradient was: since the gradient behaves predictably, it should be completely correctable.
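
For readers unfamiliar with it, a rolling-ball correction looks roughly like the sketch below. This uses the scikit-image implementation rather than ImageJ's, and the file name and radius are placeholders I made up for illustration; the essential step is subtracting an estimated smooth background from the image.

```python
# Rough sketch of rolling-ball background subtraction, using scikit-image's
# implementation rather than ImageJ's. File name and radius are placeholders.
import numpy as np
from skimage import io, restoration

image = io.imread("example_field.tif").astype(float)

# Estimate a smoothly varying background by "rolling" a ball of the given
# radius under the intensity surface, then subtract it from the image.
background = restoration.rolling_ball(image, radius=50)
corrected = image - background
```

The ball radius has to be large relative to the objects of interest, and the estimated background can only follow gradients so steep before the correction starts to break down.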

I hadn't really thought about the image correction step at all until this point, so I dove into the literature to figure out what others were doing in high-throughput microscopy. It turned out that everyone was treating this step as if it either didn't matter or was a solved problem, and so most methods sections simply said, "we applied uneven illumination correction and background subtraction" (meaning they removed the brightness gradient) without any explanation. There are in fact a lot of variant methods for each of these processes: some are flat-out wrong, others are necessarily inaccurate, and still others have an accuracy that depends heavily on properties of the image. Because published imaging papers did not say how they did their correction, to be a good scientist I had to assume that they were doing it at least partially incorrectly (first rule of doing science: trust no one). Indeed, in some "representative images" (in biology, code for "the best images we could find") from published work, the inaccuracy of the correction is plainly visible.
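
To make the "flat-out wrong" point concrete, here is a toy example of my own (simulated data, made-up values, not taken from the paper): when the shading is multiplicative, a purely subtractive correction leaves an error that grows with object brightness, while dividing by the shading removes it.

```python
# Toy demonstration (simulated data, made-up values) of why a purely
# subtractive correction fails when the shading is multiplicative.
import numpy as np

rng = np.random.default_rng(0)
true_signal = rng.uniform(100, 1000, size=(256, 256))

# Multiplicative shading falling off across the field, plus a constant
# camera offset.
shading = 0.5 + 0.5 * np.linspace(0, 1, 256)[None, :]
offset = 100.0
observed = true_signal * shading + offset

# Subtractive-only correction: removes the offset but leaves an error
# proportional to the signal itself.
sub_only = observed - offset

# Ratiometric (flat-field) correction: remove the offset, then divide by
# the shading.
flat_fielded = (observed - offset) / shading

print(np.abs(sub_only - true_signal).max())      # large, signal-dependent error
print(np.abs(flat_fielded - true_signal).max())  # ~0 (floating-point noise)
```

The catch, of course, is that real images also contain an additive background on top of the multiplicative shading, which is why the details of how (and in what order) the two corrections are applied matter so much.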

And so, long story short, I compiled a list of the methods to figure out which ones did or did not work (and why), and then modified the more accurate approaches into an image correction method appropriate for "high-throughput" microscopy (which can generate thousands of images per hour). A key observation was that the gradient varied from image to image, which was surprising because it should have been caused by an optical property of the microscope (which would not change between images). The only thing that changes from image to image is the position within the sample, so I inferred that sample position might modulate the otherwise static shading pattern generated by the microscope. This turned out to be the case, which is great: the pattern is then completely repeatable, and so can be accurately corrected with reference images.
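
As a rough sketch of what reference-image correction looks like in general, the generic recipe is to estimate the additive offset from frames acquired with no excitation and the shading pattern from frames of a uniform reference, then apply both to every sample image. This is the standard dark-field/flat-field recipe, not necessarily the exact procedure from the paper, and every file name below is hypothetical.

```python
# Generic reference-image correction sketch (standard dark-field/flat-field
# recipe; not necessarily the paper's exact method). File names are made up.
import numpy as np
from skimage import io

def load_stack(paths):
    """Load a list of image files into a (n, h, w) float array."""
    return np.stack([io.imread(p).astype(float) for p in paths])

dark_frame_paths = ["dark_000.tif", "dark_001.tif"]  # no-excitation frames
flat_frame_paths = ["flat_000.tif", "flat_001.tif"]  # uniform-reference frames

# Median over many reference frames suppresses noise and outliers.
dark = np.median(load_stack(dark_frame_paths), axis=0)  # additive camera offset
flat = np.median(load_stack(flat_frame_paths), axis=0)  # offset + shading

# Normalize the shading estimate so the correction preserves average intensity.
shading = (flat - dark) / np.mean(flat - dark)

def correct(raw):
    """Remove the additive offset, then divide out the shading."""
    return (raw - dark) / shading

corrected = correct(io.imread("sample_position_042.tif").astype(float))
```

If the shading really does depend on position within the sample, as described above, the reference frames would presumably need to be acquired or grouped per position rather than assumed constant across the whole acquisition.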

I'm writing up my dissertation right now, and it will have an extensive chapter on this topic. I'll post it somewhere after my defense and add a link here for those who want more detail on the problem of fluorescence image background correction.

Footnotes

  1. Coster, Adam D., et al. "A simple image correction method for high-throughput microscopy." Nature Methods 11(6) (2014): 602. doi:10.1038/nmeth.2971