SEM NOTES #2

Scanning Confocal Microscopy

In recent years a new type of microscopy has become popular that utilizes the principle of a raster or "scanning" pattern to image a specimen. These microscopes are called scanning confocal microscopes or SCMs. The term confocal means "having the same focus" and in practice this refers to two lenses aligned so as to be focused at an identical point in space. There are two radically different designs presently employed to achieve this phenomenon of confocality. They differ principally in the manner in which the raster pattern is established on the sample. Each type of design has advantages and disadvantages over the other and we will discuss each separately.

Like the SEM, the confocal microscope differs from standard microscopes in that it does not function as an optical microscope but rather as a probe forming/signal detecting instrument. Thus, despite the fact that it relies on conventional light optics and glass lenses to function, it is as different from the standard U.V. microscope it is mounted on as the SEM is from a TEM. Like the SEM, the confocal microscope builds its image in a point by point manner based on the signal strength reaching the detector from the specimen. To do this one must create a point scanning or raster pattern on the specimen.

As the name implies, a confocal microscope acts by scanning its illumination source over the specimen, and some mechanism must be created for establishing this raster pattern. There are only two ways to achieve this. One must either physically deflect the illumination source to create a raster pattern or leave the beam stationary and move the sample in a raster pattern. Some of the earliest confocals used the scanning specimen technique, in which a very small sample was placed on the end of a piezo-electric device which could then be rapidly shifted by controlling the current going to it. Optically this is the most stable design for a confocal microscope, but it has the severe limitation that only very small and stable specimens can be examined. Certainly no living or wet specimens would work, nor would any large or heavy specimens.

The primary difference between a confocal microscope and a conventional wide field microscope is the increased longitudinal resolution of the confocal microscope. Longitudinal resolution is defined as the ability to resolve objects in the optical axis or "Z-plane". This is in contrast with the transverse resolution, which is the ability to resolve in the "X-Y plane" of a flat field. Each can be defined by the following equations:

Transverse resolution = 0.61 λ / NA

Longitudinal resolution = 2 λ / NA^2

Where λ = wavelength of illumination and NA = numerical aperture of the objective lens. Thus as the NA increases, resolution in both the longitudinal and transverse dimensions improves. LR and TR can be compared at a given wavelength by reducing the equations to:

LR/TR = 4 / (1.22 NA)

But even under the best conditions (NA = 1.4) the LR will be about twice the TR.
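
As a quick numerical check, here is a minimal Python sketch of the two equations (green light at 500 nm and an NA of 1.4 are assumed values):

wavelength = 500.0   # nm, assumed illumination wavelength
NA = 1.4             # numerical aperture of the objective

transverse = 0.61 * wavelength / NA        # resolution in the X-Y plane
longitudinal = 2.0 * wavelength / NA ** 2  # resolution along the optical axis

print(f"TR: {transverse:.0f} nm")                 # ~218 nm
print(f"LR: {longitudinal:.0f} nm")               # ~510 nm
print(f"LR/TR: {longitudinal / transverse:.2f}")  # ~2.3, i.e. about twice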

Confocal imaging greatly improves our ability to resolve in the LR, almost to the point where it equals the TR.

Scanning Aperture Disk:

As the name implies, the scanning aperture disk type of confocal microscope uses a perforated disk (a "Nipkow" disk) to establish the raster pattern on the sample. The disk, which spins at high speed, contains a series of tiny holes arranged in a very precise pattern. In one type of scanning aperture disk confocal microscope, known as the tandem scanning microscope, the illumination enters from above the aperture disk and proceeds towards the sample. As it passes through the disk the beam becomes highly attenuated and is imaged as a very small, nearly diffraction limited point in the focal plane. After passing through a beam splitter, the illumination is focused by the objective lens of the microscope and brought to focus in a single focal plane. The reflected light from this single point is then reflected back through the objective lens, split by the beam splitter, and directed back through a corresponding aperture in the aperture disk. Thus only light that is brought to focus at the single point in that single focal plane is capable of being reflected back to the viewer. All other extraneous signal is eliminated from the image. The perforations in the aperture disk therefore act as both point source apertures and point detector apertures.

[Diagram]

The aperture disk is constantly spinning, and as light passes through the individual apertures a raster pattern is established on the sample. Each of these points lies in the same focal plane, thus the image acquired represents all those reflected points that lie in a single image plane. By imaging a series of individual focal planes a "through focus series" or "optical sectioning series" can be produced. The focal plane is determined by the fixed strength of the glass objective lens. Thus the only way to change the focal plane is to change the distance between the sample and the objective lens, and this is achieved either with a conventional stage adjuster or, more precisely, by a piezo-electric stage height controller. Because the aperture disk rotates at high speed, a near real time image of the sample is produced. A modification of this design uses a shifting slit aperture rather than point apertures. This system sacrifices resolution for increased illumination.

Laser Scanning Microscope:

The second type of scanning confocal microscope achieves essentially the same result by a very different method. In a laser scanning confocal microscope the illumination source passes through a beam splitter and is moved in a raster pattern by a set of pivoting mirrors. These mirrors cause the beam to move in an X and Y pattern similar to what occurs in the SEM. The beam then passes through a tube lens and then an objective lens which focuses it on the specimen. This produces a diffraction limited light spot which achieves its minimum size only in one plane of the specimen. Traveling back via the objective lens, tube lens, and scanners, the reflected light is directed to a beam splitter. Another lens then focuses the reflected beam so that only illumination from the one point in the focal plane is brought to focus at a point corresponding to the pinhole diaphragm or aperture. The transmitted light is then detected by a photomultiplier tube and the resulting signal is represented by points on a CRT.

[Diagram]

A second type of laser scanning confocal microscope, known as the Odyssey SCM, has recently been developed by NORAN Inc. Rather than deflecting the laser illumination by mechanical mirrors, the Odyssey uses an acousto-optic deflector (AOD). An AOD is a glass body in which multiple transducers can establish sound waves. These closely spaced waves perform much like the physical grooves of a diffraction grating, and by changing the frequency of the signal driving the AOD transducers the beam can be deflected in the x-axis at very high speed. Coupled with a mirror deflector in the y-axis, the AOD deflected laser can scan a sample (512 X 480) at 30 frames per second (7 times faster than a dual mirror system). Also, by varying the amplitude of the signal driving the AOD transducers, the intensity of the laser can be continuously varied. Another difference between the AOD system and other laser SCMs is that it employs a variable final slit rather than an aperture. Thus true confocality is only achieved in one axis rather than two.

All types of scanning confocal microscopes can thus acquire images from a single focal plane. By eliminating extraneous signal from above and below the focal plane, an image is produced that is significantly improved in contrast and resolution over conventional light microscopes. Furthermore, because images can be acquired as single planes, these can be individually stored, manipulated, and recombined to produce three dimensional representations of serial optical sections. Ignoring factors such as cost, the different systems have pluses and minuses. The spinning aperture disk microscopes can image in near real time and can use a variety of primary excitation illuminations ranging from white light to true U.V. While these features are clearly advantageous, the aperture systems suffer from a great deal of signal loss (due to passing through two small apertures) and an inability to change the raster pattern, which is fixed by the distribution of holes in the aperture disk. In contrast, the laser scanning microscopes can alter the scan pattern and thereby magnify the image by upwards of a factor of eight. Also, because the signal is attenuated by only one aperture and can be enhanced by the very sensitive photomultiplier tube, reflected light that is very low in strength can still be seen on a laser confocal microscope. One major drawback is that because the raster pattern is established by physically moving mirrors, real time imaging is impossible; the best that can be achieved now is four seconds per frame. Also, because a laser must be used as the primary beam source, true colors and true U.V. fluorochromes cannot be visualized on a laser scanning microscope. Another disadvantage is that the operator can only see the resultant image as it is presented on the CRT, whereas with the aperture disk systems the operator can either look through the eyepiece or record using a TV camera and CRT.

Stereo Pairs:

Although the micrographs produced by the SEM appear to be in three dimensions, this is actually not the case but is simply a result of the great depth of field offered by the SEM. However, one can take advantage of this depth of field to produce true stereo micrographs using the SEM. In order to achieve a stereo view the same object must be viewed from slightly different angles. A one eyed person, single lens camera, or single micrograph cannot produce this effect. When the same object is viewed separately by each eye, the regions of overlap can be fused by the brain and a great deal of spatial information can be gained. In doing this in the SEM certain factors must be taken into account.

Stereo micrographs are produced in the SEM by taking two separate micrographs of the same object from different angles. Depending on the magnification, a tilt difference of between 5 degrees and 10 degrees is usually ideal. This can be accomplished in one of two ways. First, the specimen can be physically tilted, recentered, and refocused at an angle different from the previous micrograph. Care should be taken to refocus the image using specimen height, since large changes in the strength of the final lens current will produce a micrograph of a different magnification and of a slightly different rotation. Alternatively, some SEMs offer the capability of angling the incident beam. This has the advantage of not having to translate the specimen, and it also makes it possible to rapidly switch back and forth between incident beam angles. Using this technique real-time stereo imaging can be done in the TV mode.

In order to create a stereo pair from two separate micrographs certain rules must be followed. First, the images must be aligned in the proper fashion, otherwise regions that are actually peaks will appear as valleys. To do this a certain convention is followed in which the lower or less tilted micrograph is viewed by the left eye while the more tilted micrograph (positive tilt) is viewed by the right eye. Next, the images should be arranged so that the tilt axis runs parallel to the interocular plane. Finally, the center to center spacing of each object must be carefully positioned so that the two images can be easily fused. This is accomplished either by looking at each micrograph separately with each eye (a difficult trick to master) or by using a stereo viewer or glasses. Total magnification is also a concern here, for if the micrographs are too big the center to center spacing will be so large that the images cannot be fused together.

Although the separate images are the result of different shadows created by interaction of the beam with the specimen, we cannot create a stereo view simply by having two detectors separated by a few degrees. The beam must actually strike the specimen from different angles. This is done either by tilting the specimen or, as Leica does it, by shifting the beam slightly so that it strikes the specimen from slightly different angles.

The conventional way of displaying a three dimensional image is as a side by side stereo pair. This can be done for either color or black and white images. A second method is to display each black and white image as either a blue or red image. This is known as an anaglyph projection. When viewed with red and blue filtered glasses each eye sees only one image and a stereo view is formed. Alternatively the images can be projected through polarized filters and then viewed with polarized glasses. This requires two projectors each with a polarized projection lens, careful alignment of the images, and a special "lenticular" screen.

Electronic Manipulation of Images:

Inverse signal: Normally the viewing CRT converts a large amount of signal (e.g. many secondary electrons) into a bright spot on the screen. At times it may be advantageous to reverse this signal so that areas of high signal appear dark and vice versa. To do this a simple switch on the console flips the output signal around the median grey level and an inverted (reversed, negative) image is formed.

Gamma: Due to the linear manner in which the signal from the secondary detector is received, information from very bright or very dark regions is often lost in the final image. An electronic function known as "gamma" can convert this incoming signal from its linear form into a logarithmic function. This allows the operator to utilize the incoming signal from the detector to visualize information in the black and white extremes of the range.
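
Both manipulations amount to remapping each pixel's grey value through a transfer function. A minimal Python sketch, assuming 8 bit (0-255) values and a simple logarithmic curve for gamma (the exact transfer functions vary from console to console):

import math

def invert(value, maximum=255):
    # flip the signal around the middle of the grey range
    return maximum - value

def gamma(value, maximum=255):
    # remap the linear signal logarithmically so that detail at the
    # dark end of the range is spread over more output grey levels
    return round(maximum * math.log1p(value) / math.log1p(maximum))

print(invert(200))   # a bright region (200) becomes dark (55)
print(gamma(10))     # dark detail (10) is lifted to a mid grey (110)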

Tilt Compensation: Although it may be very beneficial and desirable to tilt a specimen in the SEM, this can also distort our view of what the object truly looks like. Factors such as the distance between two structures would not be accurate unless the specimen was being looked at from directly above. To compensate for this an electronic manipulation known as tilt compensation is used. Tilt compensation alters the scan pattern on the sample in such a way as to negate the effect of tilting the sample. By reducing the length of the scan perpendicular to the tilt axis, it artificially stretches the image in an equal and opposite manner to the angle of tilt. Whenever tilt compensation is used the operator should in some way make note of it, for while spatial measurements may be preserved and accurate, the image is in fact exaggerated in the one plane.

Dynamic Focus: Another thing that is sacrificed when one tilts a sample a great deal is depth of field. Because the SEM can only bring into focus those objects that lie close to the plane of optimum focus, regions of a specimen that lie outside of this zone because of tilt will be fuzzy in appearance. To correct for this, the strength of the final lens can be varied as the beam sweeps through the scan pattern and over the specimen. In this way the entire sample can be brought into focus without significantly sacrificing resolution.

[Figure 4.12 Gold]

Electronic Manipulation of Images Cont'd:

Raster Rotation:

The scan generator establishes the raster pattern by varying the current to the opposing scan coils in such a way that the beam follows a point by point and line by line pattern. Although this is usually done by varying the current in one pair of coils at a time (i.e. all the points in one scan line are done before the beam moves down to the next line), this is not a requirement of the system. The scan pattern can be rotated to any position within the plane defined by the X and Y scan coils, with each starting point having both different X and Y values from the next point. By varying the position of the scan pattern relative to the sample, the operator can "rotate" the image on the CRT so that a pleasing orientation is achieved. It must be kept in mind, however, that raster rotation is not equivalent to rotating the sample using the stage rotation, which changes the specimen's position relative to the beam and detectors.
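
In effect the scan generator feeds the coils rotated coordinates. A minimal Python sketch of the idea (the raster size and the 30 degree angle are arbitrary assumptions):

import math

def rotated_raster(width, height, angle_degrees):
    a = math.radians(angle_degrees)
    points = []
    for row in range(height):          # line by line
        for col in range(width):       # point by point along each line
            # rotate each scan position about the centre of the raster
            x, y = col - width / 2, row - height / 2
            points.append((x * math.cos(a) - y * math.sin(a),
                           x * math.sin(a) + y * math.cos(a)))
    return points

scan = rotated_raster(512, 512, 30)    # a 512 X 512 raster rotated 30 degrees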

[Illustrate]

Image Processing

In order to make an image more useful we often employ some type of image processing. Basically there are three types of image processing: optical processing, analog processing, and digital processing. We have all had experience with optical processing. By using the glass lenses of an enlarger to focus and magnify a negative we are practicing a type of optical processing: we are changing the original data contained in the image. Such things as burning and dodging a negative during the exposure process, and altering the brightness and contrast by choosing different exposure conditions and types of photographic paper, can all be thought of as optical image processing. This is the oldest form of image modification.

Analog processing requires that the image be manipulated through electronic means. Most of us have also practiced this type of image processing. The image on a television screen is controlled by the voltage signal that the electron gun at the back of the CRT receives. By electronically altering this signal we alter the final displayed image. Changing the amplitude of the signal (the difference between the highest and lowest point) will affect what we refer to as the contrast (the difference between black and white). Altering the overall strength of the signal will influence the brightness of the image. The important thing to note about analog processing is that all of the components that go into making the image are altered together.

Finally there is digital image processing. In digital processing the image is represented by a series of picture elements or "pixels." Each pixel has a discrete position in the image and a defined intensity value. The pixel's position and intensity can be represented by numerical values. With today's high speed computers we can now manipulate each of these numerical values in a number of ways. We will talk about some of these possible manipulations.

Image Capture for Image Processing:

Traditionally there were only two ways to share the data generated on an electron microscope with other researchers: either have the other researcher look into the same microscope as you, or take a high quality photograph of the sample and make publication quality prints from the negative. With this negative a skilled microscopist could produce a high quality print that emphasized the portion of the image that he or she considered important. While this is still the primary mode of data dissemination used by microscopists, photography is fast becoming an archaic practice. Today, image capture and image processing are fast replacing photographic methods as a way to share electron microscope images. A classic example of this is the replacement of 8mm movie cameras by VCR camcorders in nearly every American home. Video is cheaper, does not require processing, is reusable, captures images and sound, and is easier to view at home.

The reasons that film can be replaced, and the reasons for doing so, lie in advances that have been made in two fields. The first is the field of electronics. The second is the field of computers and computer software. Together, these two allow a researcher to easily handle and manipulate images in ways that five years ago could only be accomplished using very sophisticated and very expensive hardware.

All of this is possible because of two things. First, the human eye can distinguish 256 different levels of grey. Second, every image can be broken down into a series of small grey dots, each of which is defined by one of these 256 grey levels. This process of turning a continuous tone image into one made up of pixels is known as "digitizing" an image, and the resultant image is said to be digitized or "pixelated." This is essentially how black and white photographs are reproduced in newspapers. Take a close look at a newspaper photograph and you will see that it is simply a series of black dots of various sizes (i.e. intensities). A black and white photograph is essentially the same thing (a series of black silver grain dots), the primary difference being the size of the dots and the spacings between them.

Twenty years ago no electronic device (TV monitor, image printer, etc.) could come close to the particle size and spacing of photographic paper. These early attempts were crude, and the large size of the spots resulted in what is called a "grainy" image. The resolution of the human eye is about 0.2 mm, so any two dots that are farther apart than 0.2 mm can be seen as separate dots or "grains." The resolution of a digitized image is therefore partially dependent on the number of pixels per unit area. The higher the number of points per unit area, the greater the resolution. Since the digitized image can be represented as a matrix of pixels, its dimensions are given in terms of the number of pixels present. A high resolution digitized image is usually defined as having a point density of 512 X 512 or greater. For reasons discussed later (e.g. loss of data points due to post processing) it is always desirable to collect the data at as high a resolution as possible. Even if your output device is not of high enough resolution to take full advantage of the data set, a denser digitized image gives you more latitude.

Example: The same image can be represented by four digitized matrices which differ in terms of their spatial resolution: 256 X 256, 128 X 128, 64 X 64, 32 X 32. How the spatial resolution of the image influences how it is perceived depends directly on the distance from the observer. When viewed from a distance all four of these images appear nearly identical, but when seen up close they are radically different.
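
A sketch of how such a series can be generated from a single high resolution data set by block averaging (a square matrix of grey values is assumed):

def downsample(image, factor):
    size = len(image) // factor
    result = []
    for r in range(size):
        row = []
        for c in range(size):
            # average each factor X factor block into a single pixel
            block = [image[r * factor + i][c * factor + j]
                     for i in range(factor) for j in range(factor)]
            row.append(round(sum(block) / len(block)))
        result.append(row)
    return result

demo = [[0, 0, 255, 255],
        [0, 0, 255, 255],
        [255, 255, 0, 0],
        [255, 255, 0, 0]]
print(downsample(demo, 2))   # [[0, 255], [255, 0]]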

[Fig. 3-4]

Contrast:

The eye's ability to detect grey levels is intimately linked to what we call contrast. Contrast refers to the distribution of brightness in an image. A high contrast image is composed primarily of dark blacks and bright whites and has a quality of intense boldness to it. By comparison, a low contrast image has only middle grey tones present and appears washed out. An image with good contrast should have all 256 grey levels represented somewhere in the image, reflecting the natural distribution all the way from black to white. This is important not just from the standpoint of aesthetics (i.e. creating a "pleasing" picture) but also in terms of information. A picture that is too high in contrast will result in a loss of image detail in those regions where there is a subtle but important change in image brightness. Likewise, an image that is too low in contrast may not reveal image detail because the differences that the eye could normally detect are not visible. Because a digital image can in fact be broken down into at least 256 grey levels (in some cases even more), and each of these can be manipulated separately, we can "enhance" or modify image contrast in very specific ways. Increasing the image contrast would involve taking a digital image of limited grey values and expanding the differences between them.

Example: A digital image is composed of pixels that range in grey value from 100 to 160. A simple and quick calculation could be made that spreads the values out around the middle value of 128 (= 1/2 of 256). All pixels of 128 remain unchanged. Those of 127 are changed to 126 while those of 129 become 130. In the next step all of the original pixels of intensity 126 become 124 while those originally of 130 become 132. The process continues in this manner (new value = original value - or + N, where N = the number of steps from 128; in effect each pixel's distance from 128 is doubled). By this process the new image would have an expanded grey scale that now ranges from 72 to 192. Still not perfect but much improved.

Likewise, reducing the contrast might involve artificially changing the grey level value between pixels to spread out the tone range.

Example: A single line of a digital image has the pixel values 0, 0, 0, 120, 120, 255, 255, 255, 255. If we were to plot these values on a Brightness/Position curve it would look like this.

We could alter the value of these pixels to the following string:

0, 40, 80, 120, 140, 180, 240, 255, 255.

While this may give us a more pleasing final image it is important to remember that we have essentially "created" data for these pixel points. It is always best to collect the original data image in as near to the "perfect" contrast balance as possible, but since this is often difficult to do it is better to err on the side of slightly too little contrast than too much. It is easier to artificially add a little contrast than it is to subtract contrast. This is true not only for digital processing but for optical processing as well. There is always a danger when collecting a contrasty image that important information (represented by subtle changes in grey level) will be lost.

Ex: Contrast stretch. If the highest value in an image is 170, simply multiply each pixel value by 1.5 (170 X 1.5 = 255) to "stretch" it out to the full grey range.
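
Both forms of contrast expansion are simple pixel-by-pixel remappings. A minimal sketch, assuming 8 bit grey values:

def stretch_full_range(pixels):
    # multiplicative stretch: scale so the brightest pixel becomes 255
    scale = 255 / max(pixels)             # e.g. 255 / 170 = 1.5
    return [round(p * scale) for p in pixels]

def expand_about_midpoint(pixels):
    # the expansion in the earlier example: each pixel's distance from
    # 128 is doubled, so a 100-160 image becomes a 72-192 image
    return [min(255, max(0, 2 * p - 128)) for p in pixels]

print(stretch_full_range([100, 130, 170]))     # [150, 195, 255]
print(expand_about_midpoint([100, 128, 160]))  # [72, 128, 192]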

Noise Reduction:

One of the main problems in any digital image capture system is noise. Generally this noise is the result of electronic interference or spurious signal that is produced by the detection system or the subsequent amplification of the signal. This is the same kind of noise that is heard on an inexpensive stereo tuner when it is played at full volume, as noise generally becomes a greater problem as one turns up the amplification of any electronic signal. In an SEM this noise can result from increasing the voltage on a photomultiplier tube (PMT) or the subsequent signal amplifier. As one is always trying to maximize the signal to noise (S/N) ratio, it would be nice if there were some method of removing any noise that was introduced to the image by way of the signal detecting system.

There are several ways by which a digital image can be processed to remove some of the noise. The first is a "filtering" approach whereby we apply a mathematical algorithm to the digitized data set and remove any spurious pixels. A spurious pixel is defined as a pixel whose value exceeds, by some predetermined value, the value of any of its immediate neighbors. Thus if we look at the following data matrix for a 3 X 3 cluster of pixels, the computer can easily determine whether the value of the central pixel is "appropriate".

127 130 129

126 248 131

128 131 133

Recognizing the "248" value as being inappropriate and therefore of likely spurious origin it could take the average of the surrounding 8 pixels (1036/8 = 129.5) and assign a rounded value of 130 creating a new pixel matrix of:

127 130 129

126 129 131

128 131 133

Thus the stray pixel is eliminated. It must be remembered, however, that this is a new data set and the original image will be lost. It is possible that the original 248 value was correct, and one must be careful in applying this type of smoothing or noise reduction system.
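
A sketch of this filtering approach using the neighbor-average test described above (the threshold of 50 grey levels is an arbitrary assumption):

def despike(image, threshold=50):
    height, width = len(image), len(image[0])
    out = [row[:] for row in image]     # work on a copy; the original
    for r in range(1, height - 1):      # data set should be preserved
        for c in range(1, width - 1):
            neighbours = [image[r + i][c + j]
                          for i in (-1, 0, 1) for j in (-1, 0, 1)
                          if (i, j) != (0, 0)]
            average = sum(neighbours) / 8
            if abs(image[r][c] - average) > threshold:
                out[r][c] = round(average)
    return out

matrix = [[127, 130, 129],
          [126, 248, 131],
          [128, 131, 133]]
print(despike(matrix))   # the central 248 becomes 129 (1035/8 = 129.4)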

A second approach to noise reduction is known as image averaging. In image averaging the same image is collected multiple times and the values of each pixel are averaged to create a new value:

Example: Collect the same 3 X 3 matrix twice and display the averaged image.

Image 1          Image 2          Average of 1 & 2

125 179 142      127 133 140      126 156 141
130 133 128      130 137 126      130 135 127
134 136 138      130 134 139      132 135 139

Notice that the spurious value, 179, is recognized and reduced regardless of where it happens to lie in the matrix. If one collects the image a third time and averages it against the average of #1 and #2 the image is further refined.

Image 3          Average of 1 & 2     New average image

126 130 139      126 156 141          126 143 140
132 133 129      130 135 127          131 134 128
132 137 137      132 135 139          132 136 138

You can see that the image will be "cleaned up" with each subsequent collection and averaging. What is more, even if a spurious signal occurs in one of the later image collections (there is an equal probability with each collection), the more sophisticated image averaging algorithms will account for this and minimize the impact of a spurious signal.
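
A minimal sketch of the pairwise averaging used in the example above (integer averages, halves rounded up, matching the tables):

def average_pair(a, b):
    return [[(x + y + 1) // 2 for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

image1 = [[125, 179, 142], [130, 133, 128], [134, 136, 138]]
image2 = [[127, 133, 140], [130, 137, 126], [130, 134, 139]]
image3 = [[126, 130, 139], [132, 133, 129], [132, 137, 137]]

avg12 = average_pair(image1, image2)   # the spurious 179 drops to 156
avg123 = average_pair(avg12, image3)   # and is further reduced to 143
print(avg123)   # [[126, 143, 140], [131, 134, 128], [132, 136, 138]]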

In the SEM we try to maximize the S/N ratio by collecting our final image at a slower scan speed, but sometimes this can degrade the quality of the image, especially if the specimen is charging badly. Although image averaging is usually employed with multiple fast image scans (e.g. TV rate), it can sometimes be used in image acquisition on the SEM.

Edge Enhancement:

Another example of brightness-based processing is edge enhancement, in which the differences between adjacent pixels are accentuated to sharpen the boundaries of structures. This is discussed in detail under Image Processing below.

Pixel Depth:

Image resolution is dependent not only on the number of pixels per unit area but also on the number of different brightness intensities that can be represented by each pixel. The different intensity levels are represented as a binary data string of "bits." In the simplest model each pixel is either black (0) or white (1). This would be a 1 bit (2^1 = 2 levels) representation. A two bit image (2^2) could represent each pixel as 00, 01, 10, or 11; thus four shades of grey are possible. One therefore requires 8 bits (2^8 = 256) of data per pixel to represent it as one of the 256 possible grey values. Many of today's sophisticated image processing computers and video games deal with color images (of which the human eye can distinguish thousands of different hues). For that reason it is not uncommon for these computers to have 16 bit (65,536 colors) or even 24 bit (16,777,216 colors) capability per pixel.
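
The arithmetic, for reference:

for bits in (1, 2, 8, 16, 24):
    print(f"{bits:2d} bits per pixel -> {2 ** bits:,} grey levels or colors")
# 1 -> 2, 2 -> 4, 8 -> 256, 16 -> 65,536, 24 -> 16,777,216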

Output Devices:

*****************************************************************

Printers for Digital Images

Dr. Alan D. Brooker JEOL (UK) Ltd., Welwyn Garden City, UK

Inkjet printers

Inkjet printers are now very affordable, and can offer very good printing resolution. A quantity of ink is ejected as a droplet (by thermal and/or electrostatic means) and fired at the paper to form a dot. In principle any form of paper can be used, but results are clearest if special low-absorbency paper is used. If transparency film is used a short drying period must be allowed or the ink will smear. Virtually all inkjet printers can print in color or monochrome, though they print much faster in monochrome. For inkjet printers the grey scale or color range is limited, the resolution is relatively low (but improving rapidly), the overall image density can be low, and images take a long time to print (up to 20 minutes), but the printers are very cheap to buy and run.

Thermal printers

There are two types of printer to consider: thermal wax (relatively cheap), and dye-sublimation (expensive). Thermal wax printers work on a dot-matrix principle (halftoning) to produce greys and colors; a heated element transfers dye from a carrier onto the paper (or other medium) in dots. The process is relatively slow (more than 5 minutes) but produces excellent density, even though non-primary colors are produced by dithering. The resolutions currently available are comparable to inkjet printers.

Dye sublimation printers represent the state of the art as far as photo-realistic images are concerned.

A single heating element sublimes dye from a carrier film onto a specially-prepared paper into which the ink diffuses. The amount of dye transferred from the film depends on the heat applied to the element - this therefore determines the grey or color-level. The diffusion process results in continuous tones (like conventional photographs) on the paper. Dye-sublimation printers are expensive to buy and run, but can generate very high quality color or monochrome images in reasonable times - less than 5 minutes. Even though the resolutions may not sound impressive, it is important to remember that for dye-sublimation printers dpi equals pixels per inch.

Laserprinters

A laser is focused onto a drum which behaves such that the regions where the laser impinges become charged. The charged drum then picks up toner particles which are subsequently deposited onto paper and sealed by heated rollers. In principle the charge properties of the drum and the focus of the laser determine resolution, but in practice the toner coarseness and delivery are more important. The vast majority of laserprinters are monochrome, although color laserprinters are now becoming available.

Laserprinters are cheap to buy and cheap to run, and the latest models boast 600-1200 dpi. The images produced by such printers are not photographic quality, but are easily recognized and show good grey scale reproduction. It is well worth paying the extra for laserprinter paper.

Conclusion

While the above is by no means an exhaustive summary of the current marketplace, it is hoped that some of the more pertinent areas have been highlighted. So what is the most suitable printer to buy? My personal prejudice is as follows:

For low-resolution, reduced color (and grey scale) images (e.g. X-ray maps, SPM images, Auger maps, etc.), destined for lab notebook copies, giveaways, or internal reports - an inkjet printer is a good compromise.

For grey scale images from anything destined for lab notebook copies, giveaways, and internal reports - a Laserprinter would suit most users.

For grey scale or color images, destined for top-copies of reports, publication, or exhibition - a dye-sublimation printer will give the required photo-realistic quality.

**************************************************************

A properly processed digitized image is still of little value unless one can share it with others. For this reason the final output device is of critical importance. One obvious way of sharing a digitized image is to send interested parties the actual image data set. Provided that the receivers have the appropriate computer hardware and software, they too can view the same image. This is not as wild as it seems. Sales of Nintendo cartridges and computers attest to the lengths people are willing to go to exchange images. The use of data transmission over telephone and dedicated computer wire systems (Bitnet, Internet, etc.) will make the distribution of image files more and more common in the future. Already journals such as Cell Motility and Cytoskeleton accept manuscripts (including micrographs) on disk and in video format. When one views a micrograph in a high quality scientific journal today, one is not seeing the same image that the author of the paper did. First, the author recorded the image on photographic film. Ideally this was done using a camera on the microscope and represents the best possible primary image. Next, the author makes a high quality photographic print using optical image processing techniques. Some researchers are better at this than others. Next, the publisher of the journal takes a paste-up of the figures and photographs the whole plate on a large format internegative. Finally, this internegative is used in the printing of the figures onto the pages of the journal. This represents an image that is four generations removed from the original image. If the digitized data set were distributed, all interested parties could look at a first generation image.

For the time being however photographic prints and other forms of hard copy will be a necessary part of image processing. Some of the options available today are:

Film Chain: This is essentially a high resolution CRT that is dedicated to image capture. A photographic camera of some sort (Polaroid, 35 mm, large format sheet film, etc.) is permanently attached to the CRT. The camera may or may not contain a lens which focuses the image from the CRT onto the film. It is important that the resolution of the CRT be high enough to take maximal advantage of the digitized image. The photographic system on the SEM is a film chain, as is the small image capture device on the confocal microscope. To capture high resolution color images one can use a high resolution black and white CRT in the film chain and break down the image into its red, green, and blue components (RGB). A three color filter wheel then rotates in front of the CRT while the color film is being exposed, and the composite result is a high resolution micrograph that has good color balance.

Thermal Printers: Thermal printers use a special paper and thermal transfers to produce an image on paper. The printer takes the incoming video or digital signal and, by heating the paper from behind, transfers a tiny dot of black (or color) onto the paper which corresponds to a pixel. One of the things one looks for in a video printer, then, is the number of dots per inch (DPI). The greater this number, the more points of information per unit area and the greater the resolution. Most of today's B&W thermal printers have a rating of approximately 300 DPI. Color printers use colored transfers of yellow, magenta, and cyan. The number of different colors available depends on the combination of these. A 2 X 2 matrix can produce nearly 1000 different colors whereas a 4 X 4 matrix can produce 4,096 colors. The more colors, however, the bigger the dot matrix required and the lower the image resolution. A good color printer may have a DPI rating of only 186. They range in price from $4,000 to $22,000.

Plain Paper Printer:

Today's laser printers can achieve surprising quality and are increasingly used as output devices for black and white graphics. Because they can produce much smaller dots than can thermal printers, there are now on the market plain paper laser printers that can produce continuous tone B&W images with a DPI of 1200! At 1200 DPI this equals one dot every 0.021 mm (25.4 mm / 1200), well below the 0.2 mm resolution of the unaided human eye, so the individual dots cannot be distinguished. A second reason for choosing one of these printers is the fact that plain paper is significantly cheaper to use than is specialty thermal paper. Its ability to withstand long term archiving is also superior to thermal papers, which last only a few years under ideal conditions. They are not necessarily cheaper overall, however, with good laser printers starting at about $14,000. They also are incapable of producing color images.

Image Analysis:

In order to perform good image processing on a digitized image, something has to be known about the composition of the image. This quantification of the data stored in an image falls under the general title of Image Analysis. One of the most useful tools in image analysis is the image histogram. A histogram is basically a graphic representation of the data contained in the image data file. A simple example of an image histogram would be a plot of how many pixels fall into certain grey level categories. We could represent three different images with the following histograms.

{FIGS 4-1 to 4-3}

Using this information we can often group together portions of the image that have similar brightness intensities. Since similar objects, or objects of similar composition, will have nearly the same grey levels when viewed under identical conditions, we can make use of this numerical information to gather quantitative data about the sample.

Example: Identical strains of bacteria are grown on two different test media. When viewed in the microscope it is apparent that the cells grow better on medium B than on medium A. The researcher would however like to quantify this, so she collects 30 random images of each preparation, all taken under the same conditions (magnification, staining, light intensity, etc.). Using the histograms generated for each image she identifies a subset of grey levels that go into making up the images of the bacteria (e.g. from 175 to 225). She now goes back and uses a subroutine to calculate the percentage of pixels from each image that fall within this range and ignores all others. Using this information she learns that 17% of the area has these intensity values in sample A whereas 42.5% of the area in sample B falls within these boundaries. Thus growth of bacteria on medium B is 2.5 times that of medium A.
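
A sketch of the measurement she performs, assuming images stored as matrices of 8 bit grey values, with the 175-225 band as the range taken to represent cells:

def histogram(image):
    counts = [0] * 256
    for row in image:
        for pixel in row:
            counts[pixel] += 1
    return counts

def area_fraction(image, low=175, high=225):
    counts = histogram(image)
    # percent of the image area whose grey level falls within the band
    return 100.0 * sum(counts[low:high + 1]) / sum(counts)

demo = [[180, 200, 50], [220, 90, 210], [40, 175, 225]]
print(round(area_fraction(demo), 1))   # 66.7: six of nine pixels in band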

Another use of image histograms would be to define the grey levels that correspond to the edges of the structure of interest. Using sophisticated sub-routines one could then define the boundaries and fill in that portion of the image that was contained by the boundaries. One could then recalculate a new histogram for the processed image and produce quantitative data about the sample regardless of the values of the original grey levels. Other sophisticated software can analyze the image and recognize shapes defined by the user. This can be useful in cases of pattern detection that might be difficult to see otherwise or in separating out objects of interest from objects that have a similar grey level intensity. These are just some of the ways that information about the brightness intensity of each pixel can be used to analyze the image.

Image Processing:

In addition to the simple image processing mentioned before (contrast stretch, regional highlighting, etc.) many other image manipulations are possible using digital image processing. Some of these involve changing pixel position and would include such things as image rotation, image inversion, digital magnification, etc. Another way in which pixel location can be used in processing involves the merging or combining of two or more separate images. This can be useful in reconstructing an image that was previously sub-sampled (e.g. serial sections) or two views from different collections (e.g. double labeling, 3-D projections, etc.).

Differences in brightness intensities can also be used in a number of different ways. Subtle shifts in brightness can be accentuated to bring out the detail of boundaries. This type of edge enhancement can be very useful in clearly showing slight changes. Likewise, stray electronic noise or spurious pixels can be removed by applying a nearest neighbor algorithm or by collecting multiple copies of the same image and averaging each new image against the previous ones. One could also produce a negative image by flipping all of the brightness intensities around a middle value of 128.

Example: An image has significant noise introduced by the electronics of the image capture system, appearing as single white pixels randomly distributed throughout the image. The operator can use a subroutine that checks each pixel's intensity against its neighbors'; if the difference between them is greater than some value (say 50 grey levels), it changes the brightness value of that pixel to the average of its nearest neighbors and thus removes the spots. A second way would be to collect the same image several times (6-10) and only save those pixel brightness values that remain nearly the same in all the separate images. This too will help in eliminating electronic noise from the image.

In addition to being used to remove electronic noise from the image, one can perform image processing that will increase or accentuate the differences between adjacent pixels to "enhance" the boundary between them. This is often referred to as "Edge Enhancement." It can be calculated using a Laplacian operation for a 3 X 3 pixel matrix:

a b c

d e f

g h i

and the equation L = [e - (a+b+c+d+f+g+h+i)/8]

e will be replaced by L only if the L value is greater than the critical value or "threshold"; otherwise the new value for e is ignored.
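
A sketch of this operation applied to a single 3 X 3 neighborhood, exactly as written above (the threshold is the user-chosen critical value):

def enhance_pixel(window, threshold):
    (a, b, c), (d, e, f), (g, h, i) = window
    L = e - (a + b + c + d + f + g + h + i) / 8
    # replace e with L only when L exceeds the critical value
    return round(L) if L > threshold else e

flat = [[100, 100, 100], [100, 102, 100], [100, 100, 100]]
edge = [[100, 100, 100], [100, 200, 100], [100, 100, 100]]
print(enhance_pixel(flat, threshold=20))  # 102: L = 2, below threshold
print(enhance_pixel(edge, threshold=20))  # 100: e replaced by L = 200 - 100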

A final way in which brightness intensities can be processed is by assigning various colors to the image based on the grey level intensity. Often this results in a loss of resolution (it takes more pixels to make a color than a grey value) but it can have benefits. One of the benefits is to make the micrograph pretty enough that it will be published on the cover of Nature or in one of the popular journals. A more useful application is to accentuate certain structures in an image so that attention can be drawn to objects of interest without drastically affecting the remainder of the image. Another useful application, once again, would be in merged images when one wants to still be able to distinguish between the two original images (double labeling, 3-D projections, etc.).

Image Storage: One of the problems with digital image analysis is the tremendous amount of computer memory storage that is required. In a normal image processor 8 bits (= 1 Byte) are required for each pixel. Since there are 262,144 pixels in a 512 X 512 image, this means that 262,144 Bytes are required to store one image. Most display monitors are not squares but rectangles, and an image format of 740 X 512 (378,880 Bytes) is more typical. A computer with a 10 MByte hard drive could hold only 26 of these images before its storage capacity was exceeded. The data file that contains the raw image is essentially the microscopist's negative, and it must be preserved. For this reason mass storage capacity and some form of data compression are an essential part of image processing. Although computers can now be routinely outfitted with large capacity hard drives (1 gigaByte = 1000 MBytes, or more), even these will reach full capacity in a relatively short time. For this reason other high capacity storage media are used. Some of these are removable hard disk drives (Winchester or Bernoulli boxes), optical disk drives (WORM = Write Once Read Many; re-writeable), tape drives (1/4" cassettes), etc. Each of these has its own advantages and drawbacks, and decisions are usually made on the basis of such considerations as cost, convenience, accessibility, and capacity.
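
The storage arithmetic, for reference (taking 10 MBytes as 10,000,000 Bytes, as above):

for width, height in ((512, 512), (740, 512)):
    size = width * height                 # Bytes needed for one image
    capacity = 10_000_000 // size         # images that fit on 10 MBytes
    print(f"{width} x {height}: {size:,} Bytes per image, {capacity} images")
# 512 x 512: 262,144 Bytes per image, 38 images
# 740 x 512: 378,880 Bytes per image, 26 images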

One type of image compression uses a technique known as run-length coding. A simple example would be to scan a single line of an image. There may be many pixels in a single line that have the same grey value (e.g. in a good fluorescence image a large portion of the pixels may be black). Rather than code each pixel in this string as a separate 8 bit point, we could code the whole string with just two 8 bit numbers, one to represent the grey level and the second to represent how many pixels in a row have that grey level. Essentially any run of three or more identical pixels produces some savings, and long runs can produce substantial savings. Of course, if the pixel intensity changed between every pixel this would double our storage, as we would dedicate two 8 bit numbers per pixel instead of just one.
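
A minimal sketch of run-length coding for one scan line (runs are capped at 255 so the length still fits in 8 bits):

def run_length_encode(line):
    runs = []
    for pixel in line:
        if runs and runs[-1][0] == pixel and runs[-1][1] < 255:
            runs[-1][1] += 1         # extend the current run
        else:
            runs.append([pixel, 1])  # start a new run
    return runs

line = [0, 0, 0, 0, 0, 0, 128, 255, 255, 255]
print(run_length_encode(line))   # [[0, 6], [128, 1], [255, 3]]
# 10 pixels (10 Bytes raw) become 3 runs (6 Bytes): a 40% saving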

A second type of compression uses Differential Pulse Code Modulation or DPCM. This algorithm assumes that although the pixel intensity levels will be changing as we move across the line, the changes between adjacent pixels will not be great. Thus rather than code the absolute grey level using 8 bits per pixel, we can record only the change that occurred between one pixel and its neighbor. If this change is small (e.g. 8 grey levels or less), then we only need 3 bits to record it, not 8. If we apply this to the whole image we can achieve a savings of nearly 63% (= (8-3)/8 = 5/8). Even if we allow for a greater change in pixel intensity (e.g. 32 grey levels vs. 8), DPCM could still save us nearly 38%.
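
A minimal sketch of the encoding and its exact reconstruction (whether each difference actually fits in the chosen 3 bit width is assumed here, not checked):

def dpcm_encode(line):
    deltas = [line[0]]               # first value kept at full 8 bits
    for prev, cur in zip(line, line[1:]):
        deltas.append(cur - prev)    # small signed difference only
    return deltas

def dpcm_decode(deltas):
    line = [deltas[0]]
    for d in deltas[1:]:
        line.append(line[-1] + d)
    return line

line = [120, 122, 121, 123, 126, 125]
encoded = dpcm_encode(line)              # [120, 2, -1, 2, 3, -1]
assert dpcm_decode(encoded) == line      # reconstruction is exact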

Digital Images as Negatives:

One final advantage of digital image files over photographic film is the fact that they can be replicated with perfect fidelity many, many times. Even in the best scientific journal or publication the image that the reader sees is at a minimum a fourth generation image (1 = original negative, 2 = original print for the plate, 3 = publisher's plate negative, 4 = publisher's printed page). As the standardization (or flexibility) of computer hardware and software becomes more universal, and more and more researchers become linked by way of their computers and computer networks, the rapid dissemination of image files will become routine. Even today it is not uncommon for researchers and product engineers to swap image files either by wire or through the exchange of floppy disks (which can hold up to five or more compressed images depending on the format). In the future readers and authors will be able to independently examine the same image, and the reader may even be able to perform their own processing and analysis to either confirm or refute the author's conclusions. Even if this does not immediately occur, the first logical step would be to distribute the original image files to the outside reviewers, who could evaluate them and even perform their own processing if warranted. The fact that electronic backups of valuable image data files can be made means that even if a catastrophe occurs the data can remain safely stored away somewhere else. The same can never be done with photographic negatives. Digital image processing is fast replacing optical and analog image processing and will soon become the primary means by which microscopists share images.

In addition to being stored as easily copied first generation images, these data files can be rapidly distributed to interested researchers around the world via data transmission over telephone and computer networks, or copied onto disks and tapes and physically distributed. In the future most publications will be distributed in such electronic media, virtually putting many publishers out of business or at least changing the way they do business. Libraries will become central electronic media processing centers where researchers will access data bases and journals remotely via their office computers.

Digital Confocal Microscopy:

A new approach to using digital image processing involves collecting a series of conventional images from a microscope that are separated in Z space in much the same way as are the images on a scanning confocal microscope. These images are captured using a CCD (charge coupled device) chip which, in conjunction with a digitizing card, can store the image in digital format. A complex set of algorithms known as deconvolution is then applied. In very simple terms, what deconvolution or "digital confocal" does is compare each pixel in each image with the same pixel in the planes above and beneath it. Based on changes in grey level intensity, the program either retains or rejects the pixel from the image plane being examined. In this way only those pixels that were much brighter than they were in the planes above and beneath, and were therefore collected "in focus," are retained, and the image is restored as a cleaned up, in focus image with all of the out of focus noise removed. This cleaned stack of digital images can then be manipulated for three dimensional projections and volume renderings in exactly the same ways scanning confocal images are.
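
A deliberately oversimplified sketch of this plane comparison (real deconvolution algorithms are far more sophisticated): a pixel is retained only where it is brighter than the same pixel in the planes immediately above and beneath it.

def keep_in_focus(stack):
    result = []
    for z in range(1, len(stack) - 1):
        above, plane, below = stack[z - 1], stack[z], stack[z + 1]
        cleaned = [[p if p > a and p > b else 0   # suppress out of focus light
                    for a, p, b in zip(ra, rp, rb)]
                   for ra, rp, rb in zip(above, plane, below)]
        result.append(cleaned)
    return result

stack = [[[10, 10], [10, 10]],   # plane above
         [[80, 10], [10, 90]],   # plane of interest
         [[10, 10], [10, 10]]]   # plane beneath
print(keep_in_focus(stack))      # [[[80, 0], [0, 90]]]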

Scanning Probe Microscopes

Typically, resolution in an optical system is limited by the wavelength of the illumination source. Due to the properties of diffraction, one can only image objects that are greater than 1/2 the wavelength of the illumination. However, if one passes the light through an aperture that is markedly smaller than the wavelength of the illumination, then based on the light transmitted or reflected by the sample one can detect (i.e. image) objects smaller than 1/2 the wavelength. A new type of light microscope, the scanning near-field microscope, takes advantage of this property by passing light through a pinhole and bouncing it off a very flat object that lies just beneath the opening. By moving either the specimen or the aperture in a raster pattern and recording the amount of signal that is produced, an image of the object can be produced.

[diagram]

This has been taken to the ultimate extreme in the case of scanning probe microscopes (SPM). In a SPM the aperture is replaced by an extremely fine probe or tip. Often this is a crystal of tungsten that has been electroetched down to a very fine tip, in some cases only an atom or two across. In the case of a scanning tunneling microscope (STM) the tip is brought very, very close to the surface of the sample and a small voltage is applied to it. Electrons from the specimen then move or "tunnel" across this gap and create a small current. If the tip moves even a tiny distance closer or further away from the atoms this tunneling current changes dramatically. The STM works by establishing a constant tunneling current and then moving the tip across the surface of the specimen in a raster pattern while keeping the current constant. The only way to do this is to move the tip up and down relative to the specimen and thus keep the distance between the tip and the specimen constant. This up and down movement of the tip is then recorded by a computer and the X,Y, & Z coordinates can be graphically displayed as a topographic map or image of the specimen surface.

[diagram]
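
A schematic Python sketch of this constant current feedback; model_current() is a stand-in for the real tip/specimen physics, and the gain and step count are arbitrary assumptions:

import math

def scan_stm(positions, current, setpoint, gain=1.0, steps=100):
    topography = []
    z = 1.0                          # tip height above the surface datum
    for x, y in positions:
        for _ in range(steps):       # let the feedback settle at each point
            error = current(x, y, z) - setpoint
            z += gain * error        # too much current: tip too close, retract
        topography.append((x, y, z)) # the recorded Z traces the surface
    return topography

def model_current(x, y, z):
    surface = {0: 0.0, 1: 0.5, 2: 0.0}        # a toy surface bump at x = 1
    return math.exp(-2.0 * (z - surface[x]))  # decays with tip-surface gap

profile = scan_stm([(0, 0), (1, 0), (2, 0)], model_current,
                   setpoint=math.exp(-2.0))       # setpoint = gap of 1.0
print([(x, round(z, 2)) for x, y, z in profile])  # [(0, 1.0), (1, 1.5), (2, 1.0)]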

The precise X, Y, and Z movements of the probe are controlled by piezoelectric devices, which can move a very small and precise amount depending on the amount of current that is passed through them. Today's piezoelectric devices are sensitive enough to record changes at the atomic level, and thus a STM can create topographic images at the atomic level. One problem associated with an STM is the fact that the sample must be relatively flat and also conductive (otherwise tunneling will not occur). As this is not terribly useful for most biological specimens, a second, different type of SPM has been developed. The Atomic Force Microscope (AFM) uses the same type of basic tip movement and position recording system as does a STM (e.g. X & Y piezoelectric controller, computer position recorder and topographic display, etc.). It differs primarily in the type of tip or probe that is used. In an AFM the tip is mounted on a spring and is literally dragged across the surface of the specimen in much the same way as is a stylus on a record. As the tip interacts with the atoms in the surface it is repelled by atomic forces (hence the name) and is deflected up or down. These up and down movements are recorded by measuring either the change in tunneling current across the spring or the optical deflection of a laser beam bouncing off of the tip.

[diagram]

SPMs allow us to use scanning technology to image objects at the atomic level. Depending on the type of detector tip used, wet and/or non-conductive biological specimens can be examined. New probe designs (e.g. ion probes, etc.) are allowing us to use this basic technology to create three dimensional maps of a wide variety of specimens at the atomic level without being limited by the boundaries of standard light and lens based optics.

Low Voltage SEM

Lateral resolution in the SEM is dependent on four limiting factors:

a) the size of the primary beam probe at the specimen surface.

b) the amplitude (stability of the energy spread) of the primary beam current (which affects the signal to noise ratio).

c) the penetration depth of the primary beam and the size of the region of signal production.

d) the effect of charging on the specimen.

Each of these is affected by different variables, which we will discuss. While any SEM can be run in a low voltage (5 keV or less) mode, only a field emission gun (FEG) SEM has a probe size and stability that truly allow us to take full advantage of low voltage imaging.

Size of Primary Beam Probe:

The final size of the beam probe striking the specimen is dependent on a number of factors. First, the size of the region from which the primary beam electrons are generated is of critical importance. On a bent tungsten wire filament this region is approximately 10^6 Å, for a LaB6 emitter it is 10^5 Å, and for a field emission source it is 10^2 Å. Thus the electron source for a FEG is 4 orders of magnitude smaller than that of a standard tungsten emitter. All of the lenses in the column of the SEM (condenser, final lens) further focus and demagnify this spot, and so with very similar lenses the ultimate size of the probe hitting the specimen will always be much smaller in an FEG SEM.

Amplitude of Beam:

Changes in the amplitude of the beam, or its energy spread, can be very detrimental. Not only do such changes manifest themselves as increased chromatic aberration in each of the condensing lenses of the column, but they can also result in differential signal production (since this is dependent on how many primary beam electrons strike the specimen). Once again an FEG-SEM has a significant advantage over conventional SEMs in that it typically has an energy spread of 0.2-0.3 eV, whereas tungsten and LaB6 emitters range from 1-4 eV. This may not sound like a lot when one considers accelerating voltages of 15 to 20 keV, but it is an order of magnitude difference which, when amplified by chromatic aberration, can become significant. The issue of beam stability becomes even more important when one considers low voltage beams of less than 1 keV.

Depth Penetration of Beam:

Just as the size of the region of primary excitation is proportional to the size of the beam probe, it is also dependent on the depth to which the primary beam penetrates into the specimen. The lower the accelerating voltage the better, but often, in order to efficiently collect most of the electrons being produced by the emitter, one must use an anode/cathode difference of 10 keV or more. One way around this is to decelerate the primary beam electrons before they reach the specimen. This can be done either up in the gun assembly or closer to the specimen. Most SEMs can only do this in the region of the anode/cathode and thus trade away a lot of primary beam electrons, while at the same time introducing the potential for chromatic aberration.

The effects of increasing beam penetration can be seen on thin, low atomic weight specimens. In the example below, the cuticular hairs of an insect are easily penetrated by the beam at a relatively low accelerating voltage (10 keV), and signal is produced from a greater volume of the specimen, resulting in dramatically decreased resolution of the surface of the specimen.

Charging:

Charging effects can be minimized by coating the specimen or by reducing the total number of electrons needed to generate a useful signal. Because a FEG SEM crams so many electrons into such a small probe, one can generate a comparable signal without having to oversaturate the specimen with electrons. This, coupled with the reduced energy of the beam, results in less specimen damage and reduced charging.

In-Lens Detector:

The ability to image a specimen in the SEM is often limited not so much by the specimen or the signal it produces, but by the ability of the detector to collect this signal. This becomes a critical issue at the very short working distances (5 mm or less) which are necessary for very high resolution work. A secondary electron detector positioned to the side of the specimen is sometimes blocked from receiving signal by the specimen and stage itself. This is similar to the situation with a specimen that has a deep cavity from which signal cannot escape, despite the fact that a significant amount of signal is being produced.

One attempt to overcome this limitation in signal collection is to place a secondary electron detector within the final lens of the SEM. In this way the detector is on nearly the same optical axis as the primary beam itself, thus the position of the detector relative to the source of the signal is not the limiting factor in signal detection. Because the secondary electron detector does not need to be positioned between the specimen and the final lens, very short working distances can be used and very high resolution obtained. The secondary electrons of the signal can be distinguished from the electrons of the primary beam by both their significantly lower energy and their directional vector (i.e. opposite in direction to that of the primary beam). The secondary electrons produced by the specimen do not interfere with the primary beam electrons, the situation being analogous to shooting a water pistol into the air during a driving rainstorm. The chances of water droplets from the water pistol actually hitting individual raindrops are vanishingly small, despite the greater numbers and significantly higher energy of the rainstorm.

Like the electrons of the primary beam, the secondary signal electrons are focused by the electromagnetic field of the final lens and concentrated into a smaller area. A converging lens works the same way regardless of the direction from which the electrons enter the lens. Thus the final lens acts somewhat like a signal collector, concentrating the secondary electrons before detection by the in-lens detector.