Archive of Green Fluorescent Blog
Tag Archives: ImageJ
Just heard a great talk today by Pavel Tomancak from the Max Planck Institute. He’s doing amazing work on systematic imaging of the fly RNAome & proteome during development. Check out his website (click his name above).
He also talked about Fiji, which is open-source software just like ImageJ, but with a supporting community that develops new scripts & applications.
A most unusual “open source” development that he is very proud of is the OpenSPIM.
SPIM – Selective Plane Illumination Microscopy – is a microscopy method in which a laser sends a narrow light sheet through the specimen, with the objective at 90 degrees to the light sheet (there are SPIM designs with up to 4 objectives that can image from all 4 sides. See here). This type of microscopy is good for imaging thick live specimens (e.g. a whole worm, fly or fish embryo). Due to the thickness of the sample, wide-field imaging would produce too much background.
So, there are good SPIM microscopes that you can buy from companies like Zeiss, but he developed the OpenSPIM, a build-it-yourself SPIM microscope that actually fits into a suitcase (he took it to South Africa with him to show college & high-school kids). He claims that a non-specialist can construct it in one hour. All the details (parts list, “IKEA/Lego style” assembly instructions) are found on the website. The cost, he estimates, is ~$40,000 (with the camera being about half of that). He claims that the OpenSPIM is comparable, in image quality, to commercial SPIM microscopes from 5 years ago. Pretty good for standard imaging.
I like the idea.
In most microscopy images published in research papers, there is a scale bar. The scale bar is like a ruler that lets you compare sizes and distances in images from different sources. Although a scale bar is helpful for assessing by eye, many image processing programs allow you to measure distances in the image. The problem is that these measurements are in pixels. That is what I encountered when I wanted to measure certain objects in my images. Converting from pixels to nanometers (or microns) requires a simple formula and some prior data, as follows:
- Objective magnification
- Lens magnification (in some microscopes, it is possible to get extra magnification of 1.25x, 1.6x or 2x)
- C-mount magnification (usually 1x)
- Pixel size – the actual pixel size of the camera attached to the microscope
- Binning – i.e. combining a cluster of pixels into a single pixel. The common options are 1×1, 2×2 and 4×4. Binning is usually used to reduce noise, but at the expense of resolution.
Image pixel size = camera pixel size × binning / (obj. mag × lens mag × C-mount mag)
For a Cascade 512 camera (16µm/pixel on the CCD), at 60x magnification and 1×1 binning:
Pixel size = 16 × 1 / (60 × 1 × 1) = 0.2667µm = 266.7nm.
Obviously, the smaller the camera’s pixel size, the better the resolution (i.e. the smaller the actual pixel size in the image).
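The formula above is easy to wrap in a small helper. A minimal sketch (the function name is mine, not from any particular package), reproducing the Cascade 512 example:

```python
def image_pixel_size_um(camera_pixel_um, binning, obj_mag, lens_mag=1.0, c_mount=1.0):
    """Effective pixel size in the image plane, in micrometers."""
    return camera_pixel_um * binning / (obj_mag * lens_mag * c_mount)

# Cascade 512 example from the text: 16 um camera pixels, 60x objective, 1x1 binning
size_um = image_pixel_size_um(16, 1, 60)
print(f"{size_um:.4f} um = {size_um * 1000:.1f} nm")  # 0.2667 um = 266.7 nm
```

With 2×2 binning the same setup would give 0.5333 µm per image pixel: less noise, but coarser sampling.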
Here is a helpful table of pixel sizes (in nanometers) for some common cameras:
Note that all cameras listed here have square pixels (e.g. 9300×9300nm). Some cameras have rectangular pixels (e.g. the ExwaveHAD 3CCD, with 6350×7400nm). In such cases, the length and the width of the pixel should be converted separately. However, I am told that microscope-intended cameras today have only square pixels, not rectangular ones.
For an explanation how to add a scale bar in ImageJ, click here.
For an explanation how to add a scale bar in Photoshop, click here.
In the previous post I introduced the RNA FISH protocol, at the end of which we have a slide that we can image. If all goes well, that image should show “spots” of the RNA we FISHed for.
What data do we want to get from the spot analysis?
1) The number of specific RNAs in the cell.
2) The localization of said RNA.
3) Spot intensity, which tells us how many RNAs are in a particular spot (especially important for transcription analysis).
The simplest, most straightforward way is to manually count spots, using the “cell counter” option in ImageJ. However, not only is it very tedious, it is also wrong, because our eyes cannot easily distinguish RNA spots from the “noise” of the assay.
Many labs have developed computational tools to analyze such spots. The most common method uses a Gaussian-fit algorithm that finds single sources of light based on Airy-disc calculations. I am unfamiliar with the math behind it; I just try to use the program correctly. The program I currently use was developed by Timothée Lionnet in our lab. It is as yet unpublished, but is briefly described here. (Tim is helping me a lot with the analysis!)
In the figure below you can see a simplification of how the program works:
Basically, I load the image into the program and choose the best z-section, with the sharpest foci. I then use the mouse to navigate across the image to look for spots that may show a fit. A zoomed-in section of the image is shown in a small window to assist. The x, y & z profiles of a line across the zoomed-in window show the intensity along each axis. What I’m looking for are nice high peaks that co-localize in x, y & z. I then press a button that calculates the Gaussian fit, which is shown as a red overlay. If I get a nice bell shape in all three axes, I’m happy and I press “record”. I collect ~10 such good records. If there is no Gaussian fit (e.g. plot B), I move on. This can be tedious, but is usually required just once per set of images.
The next step is determining the threshold (not an easy task), after which the program outputs three files. One is an image file with the spots at the positions identified by the program. Another is a text file with the location (x, y, z) and intensity of each spot. The last is a file with all the parameters used by the program. The most useful to me are the xy & z parameters: I can use those to do batch analysis of all images from the same set (i.e. same experiment/imaging conditions).
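I don’t know the internals of Tim’s program, but the basic idea of characterizing a candidate spot with a Gaussian can be sketched with numpy alone. This is a simplified moment-based estimate, not the actual algorithm; all names and numbers here are mine:

```python
import numpy as np

def fit_spot(patch):
    """Estimate 2D Gaussian parameters (center, widths) for a candidate spot.

    Moment-based approximation: the intensity-weighted centroid gives
    (x0, y0), and the second moments give the widths. A real fitter would
    refine these by nonlinear least squares.
    """
    patch = patch - patch.min()          # crude background subtraction
    total = patch.sum()
    ys, xs = np.indices(patch.shape)     # row = y, column = x
    x0 = (xs * patch).sum() / total      # intensity-weighted centroid
    y0 = (ys * patch).sum() / total
    sx = np.sqrt(((xs - x0) ** 2 * patch).sum() / total)  # width along x
    sy = np.sqrt(((ys - y0) ** 2 * patch).sum() / total)  # width along y
    return x0, y0, sx, sy

# Synthetic diffraction-limited spot centered at (x=7, y=5), sigma = 1.5 px
ys, xs = np.indices((15, 15))
spot = np.exp(-((xs - 7.0) ** 2 + (ys - 5.0) ** 2) / (2 * 1.5 ** 2))
x0, y0, sx, sy = fit_spot(spot)
```

A diffraction-limited spot that fits a clean bell shape in all axes is a plausible single molecule; a blob whose estimated widths come out much larger than the expected point-spread function is a candidate for rejection.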
So what is the problem with the threshold?
Well, we do not want to set it so low that it detects “noise”, but not so high that we miss low-intensity “real” spots. From what I was told, the best way to determine whether I am using the correct threshold is to plot a histogram of all the spots based on their intensity, and compare the plots at low and high thresholds.
Here is a nice set of histograms which demonstrates the usefulness of this:
The histogram at the low threshold includes a lot of low-intensity spots which are separate from the “true” spots, whereas the green histogram at the higher threshold shows a nice Gaussian distribution (no relation to the Gaussian fit).
A second test of the threshold setting is to run the spot analysis on cells which do not express your RNA (i.e. you should get zero signal). You never get zero signal. But the threshold should be set so that you are satisfied with the background in negative cells compared to the positive cells. Are you satisfied with 10 spots per cell? 1 spot per cell? 0.1 spots per cell? That really depends on the experimental question.
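The two-population picture behind this check is easy to simulate. A toy sketch (my own made-up intensity values, not real data): many dim “noise” detections plus a brighter population of true spots, counted at a low and a high threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spot intensities (arbitrary units): a large dim "noise"
# population and a smaller, brighter "true spot" population.
noise = rng.normal(loc=50, scale=10, size=2000)
true_spots = rng.normal(loc=200, scale=30, size=500)
all_spots = np.concatenate([noise, true_spots])

def detected(intensities, threshold):
    """Keep only detections brighter than the threshold."""
    return intensities[intensities > threshold]

low = detected(all_spots, 30)    # low threshold: noise dominates the counts
high = detected(all_spots, 120)  # high threshold: mostly the true population

# Histograms like the ones described in the text
hist_low, _ = np.histogram(low, bins=30)
hist_high, _ = np.histogram(high, bins=30)
```

At the low threshold the histogram is dominated by the dim noise mode; at the high threshold what remains is roughly the Gaussian-shaped distribution of true spot intensities.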
[Edit Dec. 3, 2012: after further discussion with Tim, another important issue came up: we also need to take into account the number of cells in the frame – that is, to normalize by the number of cells, to get an estimate of the number of spots per cell.
As to the background level in the negative control – I understand that there is no accepted standard “background level”. It really depends only on the experimental needs, and on the ratio with the positive control. For me, though the pos. ctrl has 1000’s of spots and the negative ~1 spot per cell, I still consider this background not low enough for my purposes.]
How to improve the data?
My initial imaging was done with the 60x objective. That resolution is enough to see single spots, and I could have several cells in each image. However, using the 100x objective improves the resolution and hence the spot analysis (particularly for spots that lie near one another). The downside is that the field of view is smaller, so more images are needed to get data on the same number of cells.
Second, more photons are better: increase the exposure time. However, increasing it too much will also increase the background, so a balance must be found.
Third, hybridize with more probes – the more probes you have (different probes against the same RNA, not a greater amount of the same probe), the better the signal-to-noise ratio. Also, performing double FISH with two different probe sets carrying different dyes against the same mRNA will definitely reduce the background when doing co-localization analysis (which is not easy).
The histogram mentioned above will also be useful for later analysis of spot intensity, particularly for transcription analysis. But that will be a separate post.
[Edit: those of you who read this post when it was published will have noticed that I have now removed the screenshots of the spot analysis program and replaced them with an illustration. This has to do with future publications and/or patent applications. Sorry.]
The July issue of Nature Methods is dedicated to microscopy. Actually, to quantitative microscopy. As the editor points out, microscopy started as a qualitative method to visualize structures and movement of single cells and organelles. With advances in optics on the one hand, and computer programs on the other, microscopy is becoming much more quantitative. We can now accurately measure signal intensity, and get to time resolution of milliseconds and spatial resolution of a few nanometers. We have an increasing number of tools for fluorescence microscopy that allow multiple-color imaging in both fixed and live cells. However, many of the programs are still in their infancy and there is room for improvement. Another problem the editor points at is the fact that most “bioimage informatics” tools are not very user-friendly, especially for biologists with little or no background in physics and programming.
Last, but not least, is the fact that we can use only a limited number of colors per experiment, thus limiting the number of genes or proteins that we can measure in each experiment. This is in contrast to the accumulating whole-genome/transcriptome/proteome data that is now prevalent in the literature.
So, in the next several posts I will review the articles in this special issue. I think it would be interesting.
Methods in brief section:
In this section, a brief overview of a recent paper from Erin Schuman’s lab points to a new method for assessing mRNA abundance in a sample. This method, NanoString nCounter, uses mRNAs attached to a surface, and probes that are fluorescently barcoded (i.e. each probe has a specific color barcode). Following hybridization, one can measure the signal intensity for each mRNA and assess its level. However, this method was not used in situ to determine mRNA levels of multiple genes in the same cell.
A similar bar-coded approach, but with single cell application, is found in an article in this issue. I will discuss that in a separate post.
Tools in brief section:
Super-resolution FlAsH – Fluorescein-arsenical helix binder (FlAsH) is a small fluorescent molecule that binds to tetra-cysteine structures in proteins. However, this dye has proved to be both toxic to certain cell types and to have a high fluorescent background. Here, they mention work from the lab of Christophe Zimmer, which used FlAsH and super-resolution microscopy to look at HIV intracellular complexes at ~30nm resolution. Using UV light (405nm) in short pulses, instead of the standard blue (488nm) excitation light for FlAsH, they were able to show that the FlAsH-tetracysteine complex is photostable and can fluctuate between dark and bright states, leading to low background and high resolution.
News & Views section
Faster and more versatile tools for super-resolution fluorescence microscopy. In this short review, Alex Small discusses two computational methods, published in this issue, for calculating fluorophore positions in single-molecule or super-resolution microscopy.
Omnidirectional microscopy. Here, Weber & Huisken discuss two papers published in this issue that deal with in toto imaging of tissues, organs or embryos.
The commentary section contains three articles which discuss why bioimage informatics is important; what the current challenges are in creating open-source software for image analysis; and a call to make such software more user-friendly.
I know that bioimage analysis software is important for advancing the field; I get why open source would be a better, but more challenging, choice for software development; and I truly and completely agree that these programs should be user-friendly for biologists like me. I mean, I don’t even know how to use all of the capabilities of PowerPoint, let alone programs like ImageJ or MetaMorph, which are still relatively friendly compared to other software. I certainly do not know how to write plugins or applications and such.
Speaking of ImageJ, there is a historical commentary: 25 years from NIH Image to ImageJ software. This would probably interest programmers more than biologists.
Following that, there are three Perspective articles about other image analysis software: Fiji, BioImageXD, and Icy. I tried to read some of this stuff, but it’s really more for people with programming experience, I guess.
A review paper summarizes the available bioimage tools for acquisition, storage and analysis. Unlike the commentary and perspective sections, this review is readable by a biologist like me. In fact, I would even recommend it, particularly if you are new to the field and planning to acquire such software.
Well, that’s about it for this post. In the next post I will review the “Brief communications” section, which contains seven short papers. A third post will be devoted to reviewing the “Articles” section, with four interesting papers.
Lelek M, Di Nunzio F, Henriques R, Charneau P, Arhel N, & Zimmer C (2012). Superresolution imaging of HIV in infected cells with FlAsH-PALM. Proceedings of the National Academy of Sciences of the United States of America, 109 (22), 8564-9 PMID: 22586087
Small A (2012). Faster and more versatile tools for super-resolution fluorescence microscopy. Nature methods, 9 (7), 655-6 PMID: 22743767
Weber M, & Huisken J (2012). Omnidirectional microscopy. Nature methods, 9 (7), 656-7 PMID: 22743768
Myers G (2012). Why bioimage informatics matters. Nature methods, 9 (7), 659-60 PMID: 22743769
Cardona A, & Tomancak P (2012). Current challenges in open-source bioimage informatics. Nature methods, 9 (7), 661-5 PMID: 22743770
Carpenter AE, Kamentsky L, & Eliceiri KW (2012). A call for bioimaging software usability. Nature methods, 9 (7), 666-70 PMID: 22743771
Caroline A Schneider, Wayne S Rasband, & Kevin W Eliceiri (2012). NIH Image to ImageJ: 25 years of image analysis Nature Methods, 9, 671-675 DOI: 10.1038/nmeth.2089
Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, Preibisch S, Rueden C, Saalfeld S, Schmid B, Tinevez JY, White DJ, Hartenstein V, Eliceiri K, Tomancak P, & Cardona A (2012). Fiji: an open-source platform for biological-image analysis. Nature methods, 9 (7), 676-82 PMID: 22743772
Kankaanpää P, Paavolainen L, Tiitta S, Karjalainen M, Päivärinne J, Nieminen J, Marjomäki V, Heino J, & White DJ (2012). BioImageXD: an open, general-purpose and high-throughput image-processing platform. Nature methods, 9 (7), 683-9 PMID: 22743773
de Chaumont F, Dallongeville S, Chenouard N, Hervé N, Pop S, Provoost T, Meas-Yedid V, Pankajakshan P, Lecomte T, Le Montagner Y, Lagache T, Dufour A, & Olivo-Marin JC (2012). Icy: an open bioimage informatics platform for extended reproducible research. Nature methods, 9 (7), 690-6 PMID: 22743774
Eliceiri KW, Berthold MR, Goldberg IG, Ibáñez L, Manjunath BS, Martone ME, Murphy RF, Peng H, Plant AL, Roysam B, Stuurmann N, Swedlow JR, Tomancak P, & Carpenter AE (2012). Biological imaging software tools. Nature methods, 9 (7), 697-710 PMID: 22743775
Nowadays, confocal microscopy is possibly the most widely used optical method in biological research. This method creates better (and prettier) images than widefield microscopy (whether transmitted light or epifluorescence). The main advantages of confocal over widefield microscopy are the elimination of out-of-focus glare (thus increasing resolution and signal-to-noise ratio) and the ability to collect serial optical sections of the specimen (z-sections).
The basic optical configuration is similar to that of the epifluorescence microscope. The addition that created the confocal microscope, invented by Marvin Minsky in 1955, was two pinholes. The light produced by the lamp (or laser) passes through the first pinhole on the way to the specimen. The light that is reflected (bright light) or emitted (fluorescent light) from the specimen passes through a second pinhole on the way to the detector (eyepiece, camera or any recording device). The two pinholes have the same focus – thus they are confocal. Light from other focal planes cannot pass through the second pinhole, and this reduces the background “glare” of out-of-focus fluorescence seen in widefield epifluorescence microscopes.
Since biological samples usually have a thickness of at least a few microns, one can get an image of a thin slice of the sample (e.g. 0.1 µm) without physically slicing it (an optical section). We can then move the focus along the Z axis to get clear images of higher or lower sections. Thus, for a cell 3µm thick, we can have 30 high-resolution images, each 0.1 µm thick, from bottom to top (z-sections). These can then be stacked one on top of the other (z-stacking) to create a single 2D image, or to reconstruct a 3D image of the sample.
Here’s an example from an experiment I did last week (note that this is a widefield, not confocal microscope):
Above is a composite image of 31 z-sections of U2OS cells, created with ImageJ. The “pseudo-blue” represents the blue fluorescence of a dye called DAPI (4′,6-diamidino-2-phenylindole), which intercalates into DNA and is therefore a popular nuclear dye. When bound to DNA, it is excited by UV light (peak at 358nm) and emits blue/cyan light (peak at 461nm). The “pseudo-red” color represents the fluorescence of an mCherry-ZBP1 fusion protein. mCherry is an RFP.
You can see in the image that the first and last few images in the series are out of focus. You can therefore choose the best or sharpest z-section according to your needs.
However, most people do not show a single z-section, since we would then miss a lot of information found in the other sections. The programs available today allow “stacking” the sections to create a projection of all the sections in a single image.
Here is the maximum projection of the Z sections shown above:
Maximum projection means that the algorithm chooses, for each pixel, the highest value found in any of the 31 z-sections. However, since we chose all 31 sections, we can still see a “glare” or halo. This is a result of the halos from the out-of-focus sections.
I therefore chose only a few sections to create the next image:
This image is now sharper and better looking.
The program offers other options besides maximum projection: you can choose minimum, average, median, and even standard deviation, seen in the next image (DAPI channel only):
It looks very cool. I stacked all 31 sections (of a different field), so you can see the halo from the out-of-focus sections surrounding the “black” rim of the nucleus (black since it has the minimal standard-deviation value across the images). The blue zones, with high SD, indicate a larger difference in fluorescence between the different sections.
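All these projection modes simply reduce the stack along the z axis, pixel by pixel. A minimal numpy sketch (random toy data standing in for real fluorescence sections):

```python
import numpy as np

# Toy z-stack: 31 sections of 64x64 pixels, standing in for real images
rng = np.random.default_rng(1)
stack = rng.random((31, 64, 64))

max_proj = stack.max(axis=0)         # per-pixel maximum across all sections
min_proj = stack.min(axis=0)         # per-pixel minimum
avg_proj = stack.mean(axis=0)        # average-intensity projection
med_proj = np.median(stack, axis=0)  # median projection
std_proj = stack.std(axis=0)         # standard-deviation projection

# Restricting the projection to the sharpest sections (here, a made-up
# range 12-18) is how the halo from out-of-focus sections is reduced.
sharp_proj = stack[12:19].max(axis=0)
```

This is essentially what ImageJ’s Z Project does; choosing the slice range is the “only a few sections” trick described above.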
Above, I mentioned that the blue and red are pseudo-colors. What actually happens is that the images I take with the microscope at each channel (range of wavelengths) are maintained as greyscale images.
Using the image analysis program you can then merge the images of the different channels (up to 4 in ImageJ) to create a color image. When creating the merged image, you determine which color to assign to each channel. Here is the same image, but with the colors reversed:
You should take that into account when you see pretty pictures in scientific journals.
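The merge step is really just assigning each greyscale channel to a color plane of an RGB image; “reversing” the colors swaps the assignment without touching the data. A minimal numpy sketch (function and array names are mine):

```python
import numpy as np

def merge_channels(red=None, green=None, blue=None):
    """Combine up to three greyscale images (floats in 0..1) into one RGB image."""
    channels = [c for c in (red, green, blue) if c is not None]
    rgb = np.zeros(channels[0].shape + (3,))
    for i, ch in enumerate((red, green, blue)):
        if ch is not None:
            rgb[..., i] = ch  # plane 0 = red, 1 = green, 2 = blue
    return rgb

# Two toy greyscale channels standing in for the DAPI and mCherry images
rng = np.random.default_rng(2)
dapi = rng.random((32, 32))
mcherry = rng.random((32, 32))

normal = merge_channels(red=mcherry, blue=dapi)           # conventional assignment
reversed_colors = merge_channels(red=dapi, blue=mcherry)  # colors reversed
```

Both merged images contain exactly the same measurements; only the display assignment differs, which is the point about pretty pictures in journals.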
The program also allows you to create 3D representations of your z-stack, but I haven’t learned how to do that yet.
There are many other tools one can use in the image analysis program besides creating the image. One important feature is the ability to measure the intensity of the fluorescent signal (actually, of the pixels) in certain areas within the cell. You can measure distances and angles between objects, and probably many more things that I still have to learn.