Brief History of TEM

[text 4-12]

The primary reason for using a transmission electron microscope rather than a light microscope is the vast increase in resolving power that the TEM provides. As we will learn later, this is due to the short wavelength of the illumination produced by an energized beam of electrons. It was known in the 1920s that electrons traveling as an energized beam behave in a wave-like fashion similar to light waves. Taking advantage of this, two German scientists, Ernst Ruska and Max Knoll, developed the first working electron microscope in 1932. In 1986 Ruska was awarded the Nobel Prize in Physics for this outstanding accomplishment.

Initially it was felt that biological specimens could not be examined with this unique instrument. This was because the extreme conditions inside the TEM (high vacuum, intense heat generated by the electron beam, the limited depth to which electrons can penetrate a specimen, etc.) were thought to be incompatible with wet, thick biological specimens. It was Ernst Ruska's brother Helmut (a medical student) who was most encouraging in pursuing the development of the TEM for the study of biological specimens. Some of the earliest TEM images were of the wing of a house fly (a reasonably dry and electron transparent specimen), diatom frustules, and bacteria. By 1941 the first electron micrographs of viruses were being produced, and funding for further development of the TEM came from medical researchers, not physicists.

In fact it was not until the 1950s, when specimen preparation techniques improved, that the electron microscope became widely used by biologists. Some of these advances, which we will discuss in greater detail, include improved fixatives, embedding resins, and ultramicrotomes (the machines that shave off electron transparent slices of material).

Since that time the TEM has played a major role in our ability to understand how cells are constructed and how they function. Our understanding is less than perfect, and the major emphasis in cell biology research today is placed on understanding the biochemical functions of the cell (under this I would include genetics, the attempt to understand the DNA of a cell). The heyday of purely descriptive TEM is past. Instead, today's researcher is discovering that many of the biochemical changes in a cell can only be fully understood by going back and looking at the specimen. In this and other ways the TEM is enjoying a renaissance as more and more biologists rediscover the power of this instrument. In the field of materials science the TEM has always been an essential part of many research programs and continues to play an important role.

Introduction to TEM

The main use of the transmission electron microscope is to examine in submicroscopic detail the structure, composition, and properties of specimens in ways that are not possible with other equipment or techniques. While most of us associate this with the study of biological materials, it must be pointed out that electron microscopy continues to have a significant impact on fields such as geology, chemistry, materials science, and electronics.

As part of this it is important to try to examine the sample in as near a natural state as possible. In some cases this is relatively easy to do, but often it involves performing elaborate preparative steps on the sample to make it suitable for examination in the TEM. In some cases a copy of the sample, and not the sample itself, is examined. The variations on these preparative techniques are virtually endless.

In general, however, a sample must meet certain criteria to be useful for TEM examination. These include:

1) Complete lack of water or other volatile components.

2) Ability to remain unchanged under high vacuum conditions.

3) Stability under the electron beam (thermal and physical stress).

4) Regions of both electron opacity and electron transparency.

5) Appropriate size for the TEM.

We will cover a variety of these preparative techniques for both biological and non-biological samples, but because we must work quickly during a ten-week quarter we will start right in with the preparation of biological samples; we begin in lab tomorrow with fixation of animal tissues.

Biological Specimen Preparation

[text 16-17]

Before they can be examined in the TEM nearly all biological samples must be extensively prepared and processed. In the movie "Swamp Thing" the "scientists" place a piece of the swamp thing directly into a TEM and watch it grow. This is not yet possible with modern TEMs. Instead the sample, or some representation of the sample, must be made to withstand electron bombardment and the high vacuum conditions found within the microscope. The most common method of doing this is to shave off very thin pieces or "sections" of the sample. The sections must be thin enough to allow the passage of electrons yet strong enough to hold the tissue together. The processes whereby this is achieved are known as fixation, embedding, and sectioning, and discussion of these will occupy a major portion of the first half of this course.

Goals of Specimen Preparation:

1) To observe the specimen in as near to the "natural" state as possible. The amount of time it takes to arrest or stop the cell is critical here, for we wish to know what the cell was like when it was functioning normally, not how it reacts to an artificial situation (e.g. fixation).

2) To preserve as many features of the specimen as possible. (Sometimes this is modified to remove certain components in order to determine their effect.) Conclusions drawn from incomplete preservation can be drastically wrong. Conodonts found in geologic samples were long thought to be entire organisms; it turns out that they represent the feeding apparatus of an extinct eel-like chordate.

3) To avoid the introduction of artifacts that could obscure or influence our interpretation of the specimen. In addition to changes that might occur during the fixation some features that might have been initially preserved may be extracted during further processing. One tries to keep this to a minimum.

4) To render the specimen stable for examination in the TEM. Most biological specimens are structurally weak, hydrated, and electron translucent. All of these features are the exact opposite of what is required of a TEM sample.

Problems Encountered:

1) Most specimens are metabolically active, not static systems. Biological systems change over time, not just during sudden changes in environmental conditions (e.g. fixation) but also as a normal part of their metabolism.

2) Most specimens have a significant aqueous component that is not compatible with electron microscopy.

3) Most specimens are fragile and lack the ability to be examined directly in the TEM.

4) Most specimens lack sufficient contrast to be adequately visualized in the TEM.

5) Most specimens are either too large or too thick to be examined directly in the TEM.

The basic procedure for preparation of biological samples for the TEM is as follows:

1) Fixation: This arrests cellular processes as rapidly as possible and cross-links cellular structures to preserve their "normal" morphology and chemical composition. A number of factors influence the number of distortions or "artifacts" that are induced during fixation. These include sudden changes in pH and/or temperature, osmotic shock, physical or mechanical damage, and permeabilization of cellular membranes. Each of these will be discussed later.

2) Dehydration: This process involves the exchange of water in the sample with an organic solvent. Although this step often induces undesirable changes in the tissue (e.g. cell shrinkage and extraction of components) it is a necessary part of the process. Ultimately the sample must be embedded in a hard supporting medium. These media are most often resin polymers that are not miscible with water. For this reason the removal of all cellular water is essential.

3) Infiltration: The sample must ultimately be placed within a hard plastic support medium in order for sections of the required thickness to be cut. Initially these media are introduced in liquid form so that every portion of the sample will eventually be occupied by support material. When we look at a transmission electron micrograph it is important to remember that much of what we see is only the remains of the actual sample impregnated with resin. The process whereby the liquid plastic replaces the organic solvent is known as infiltration. Depending on the viscosity of the resin this process can last from a few to many hours.

4) Embedment: Once the sample is completely infiltrated by pure resin it is placed in a mold and polymerized. The mold is usually chosen to accommodate the particular type of sample (flat embedment, pellet of cells, etc.) and so that the finished block will easily fit into the microtome.

5) Sectioning: Ultra-thin sections are cut from the polymerized block by using an ultramicrotome and a glass or diamond knife. This is perhaps the most difficult skill in electron microscopy to master. It is essential that the infiltration of the resin and its subsequent polymerization be carried out completely otherwise sectioning will be a near impossibility.

6) Staining: Biological specimens are usually composed of elements of relatively low atomic weight and thus do not differ significantly from the embedding resins in which they are contained. To add contrast, elements of high atomic weight are used to selectively stain the biological material and set it apart from the embedding resin. Atoms of high atomic weight are better able to stop or deflect the beam of electrons, whereas elements of low weight allow them to pass relatively unimpeded.

All of these steps sound rather straightforward, but it is always useful to point out at the beginning of a course in TEM that there are about a thousand different places where something can go wrong. Do not get discouraged too soon. All of these things take practice, and a good bit of luck as well.

Fixation Text: 16-28

With some exceptions which will be discussed later, all biological samples must be fixed in some way before they can be examined in the TEM. The most common way of doing this is through the use of chemical fixatives that cross-link the various components that make up the sample. Before we talk about the various chemicals used for fixation we need to discuss some basic factors that affect fixation. These include pH, total ionic strength, osmolarity, temperature, length of fixation, and method of application of the fixative.

One of the major criteria for any good fixation is that it be uniform throughout the specimen. This is dependent upon the type and size of the specimen. Even in homogeneous animal tissue such as liver, good fixation is limited to the outer 2-3 cell layers (100 µm) of the specimen surface in tissues fixed by immersion in chemicals. For this reason a very small initial specimen is critical to good ultrastructural preservation. The ideal tissue size is a piece in which no dimension is greater than 0.5 mm, and when it comes time to section the material the cell layer to be sectioned must be chosen carefully. Because mechanical injury and/or extraction of constituents will most affect the outermost cells of a tissue, these may be carefully trimmed away. Well fixed cells beneath this surface layer are the ones to then use. The central cells, especially from large chunks of tissue, will undoubtedly be the most poorly fixed and infiltrated and are therefore the worst ones to select for sectioning.

[Fig. 1.2 Hayat]

Another factor that affects overall fixation quality is osmolarity. The terms osmolarity and osmolality are often confused. Osmolarity is expressed in moles of osmotically active solute per liter of solution; osmolality in moles per kilogram of solvent. Either indicates the concentration an ideal solution must possess in order to exert the same osmotic pressure as the test solution. Because electrolyte solutions dissociate into ions they exert a greater osmotic pressure than their molarity might indicate. Tonicity, by contrast, refers to the response of cells immersed in a solution. An isotonic solution exerts an osmotic pressure equal to that exerted by the cell cytoplasm and causes the cells to neither shrink nor swell. Thus the final osmolality of a fixative solution has a direct effect on the final appearance of the cells, and differences in fixative osmolality have different effects depending on the sensitivity of the tissue to changes in osmotic pressure.

Two other terms used in discussing tonicity are hypotonic and hypertonic. A hypotonic solution is one whose osmolality is less than that of the reference solution (e.g. the cell cytoplasm), i.e. fewer moles of dissolved particles per kilogram of solvent. A hypertonic solution is one whose osmolality is greater than that of the reference solution.

The usual vehicle of a chemical fixative is a buffer. The molarity of the buffer is one of the primary factors that influences the overall osmolality of the total fixative. Despite this contribution, it should be kept in mind that the buffer is not the sole constituent: other substances such as salts, organics, and even the fixative itself contribute to the net osmolality. Since most buffers used at their physiological pH range are hypotonic, it is often useful to add other substances.

The relative contributions of some standard fixative buffers are as follows:

[Table 1.2 Hayat]

Thus the difference between 0.1 M cacodylate (220 mosm) and 0.2 M cacodylate buffer (350 mosm) is significant.

The osmolality of a fixative solution can be adjusted by several methods. Either electrolytes or non-electrolytes can be added to the fixative solution. The most commonly used non-electrolytes include sucrose, glucose, dextran, and polyvinylpyrrolidone (PVP). Osmolarity-adjusting electrolytes include salts such as NaCl or CaCl2, but these should be added before adjusting the final pH of the buffer as they might affect it. These salts should be carefully chosen; there may be times when Mg2+ ions are preferred to Ca2+, etc. The fixative solution should be adjusted to match as closely as possible the osmolarity of the tissue/cells to be fixed. Although the optimum osmolality is often found through trial and error, this problem is not a trivial one. In complex tissues such as kidney or leaf there are several different types of cells present, each with its own ideal osmolarity, and these ideals change as cells mature. A plant meristematic cell might have an ideal osmolarity of 400 mosm whereas mature vascular tissue may have an ideal of 800 mosm. Most mammalian tissues have an ideal in the range of 500 to 700 mosm. Marine organisms may have a total osmolarity of 1000 mosm or more.
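The bookkeeping described above can be sketched in a few lines of code. This is a minimal illustration only: the mosm contributions below are assumed round numbers (the cacodylate value echoes the 0.1 M figure quoted earlier), not measured values, and a real solution should be checked with an osmometer.

```python
# Illustrative sketch: estimating net fixative osmolality by summing
# component contributions, then checking it against a target range.
# All mosm values are assumed/approximate, for illustration only.

components = {
    "0.1 M cacodylate buffer": 220,  # mosm (from the buffer table)
    "2% glutaraldehyde":       200,  # mosm (assumed value)
    "added sucrose":           150,  # mosm (assumed value)
}

total = sum(components.values())
print(f"Estimated net osmolality: {total} mosm")

# Target range for most mammalian tissues per the notes: 500-700 mosm.
target_lo, target_hi = 500, 700
if target_lo <= total <= target_hi:
    print("Within the target range for mammalian tissue.")
else:
    print("Adjust additives (sucrose, NaCl, etc.) to reach the target.")
```

The point of the sketch is simply that every constituent, not just the buffer, contributes to the net osmolality that the cells actually experience.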

Another important feature of the fixative vehicle is the pH at which the fixative solution operates. Proteins carry positive and negative charges based on their numbers of acidic and basic groups; at their isoelectric pH the proteins are electrically neutral. It is therefore critical to keep the fixative solution as close as possible to the physiological pH of the cell. The pH of most animal cell cytoplasm is between 7.0 and 7.4, and this is where one would try to balance the final pH of the fixative.

There are a number of different types of buffers that are used as fixative vehicles. They each have certain properties which make them ideal for different purposes. Although they are typically used alone or in conjunction with electrolyte additives, some can be mixed for specific applications.

Cacodylate Buffer:

Effective in the 6.4-7.4 pH range. It lacks extraneous phosphates which may interfere with cytochemical studies. It causes changes in membrane permeability and may cause a redistribution of cellular materials along osmotic gradients. Calcium can be added to it without precipitation. Sodium cacodylate contains arsenic and for this reason is a health hazard. Gloves and a fume hood should be used and care taken even when fixative is not present (e.g. during rinse stages).

Collidine Buffer:

Useful in the 6.0-8.0 pH range with highest efficiency at pH 7.4. Does not react with OsO4. It is toxic and has a strong odor. It is especially stable over long periods and is therefore useful for extended storage of fixed materials, but it can extract proteins and is therefore not recommended for routine electron microscopy.

HEPES Buffer:

HEPES is a tertiary amine heterocyclic buffer and may interfere with tissue amine-aldehyde reactions. It is most useful in the pH 7.3 range. It is compatible with divalent cations and does not bind them.

Phosphate Buffers (Sorenson):

These are very close to the buffers found in living systems and are therefore more physiological than other buffers. They are relatively non-toxic and stabilize the pH better than most other buffers, and do so over a wide range of temperatures. Phosphate buffers may cause some swelling and do bind polyvalent cations, causing them to precipitate.

PIPES Buffer:

In most respects PIPES is very similar to HEPES in its buffering and reactivity actions.

Tris Buffer:

Tris has poor buffering capacity below pH 7.5, reacts with glutaraldehyde, and is a biological inhibitor. It should be avoided for most EM applications.

Veronal Acetate Buffer:

Useful at low pH (4.2-5.2) and not at pH 7.2-7.5. It reacts with aldehydes, has a short shelf life, and should only be used in limited applications.

Here are some fixative solutions and their relative osmolarities:

[Table 2.1 from JJP]


One class of fixatives used for electron microscopy is the aldehydes. Formaldehyde, with its small size and highly reactive aldehyde group, might at first glance appear to be an excellent fixative for electron microscopy.

[diagram formaldehyde]

Because speed of penetration is one important factor in good fixation, the small size of the formaldehyde molecule allows it to readily pass through membranes and react with cellular components. Unfortunately formaldehyde is less than satisfactory at preserving ultrastructural details, and its use as a fixative in electron microscopy is of limited value. This is in contrast to its chemical cousin, glutaraldehyde.

[diagram of glutaraldehyde]

Glutaraldehyde began to be widely used by electron microscopists in the early 1960s. With its two aldehyde groups glutaraldehyde can cross-link compounds and act as a molecular bridge between macromolecules. It reacts so well that it is often avoided or reduced to a minimal concentration (0.1% vs. 2.0%) in studies in which immunocytochemistry is to be performed. In these cases formaldehyde is often used in conjunction with the low concentration of glutaraldehyde. This has the effect of preserving protein antigenicity while sacrificing some ultrastructural preservation.

Glutaraldehyde reacts well with proteins by way of aldehyde reactions with the free amino groups of the amino acids present in proteins.

Although capable of reacting with a number of amino acids it appears that lysine is the most important component of protein involved in the reaction with glutaraldehyde. Chemical studies indicate that pyridine derivatives are the major reaction product of amine-glutaraldehyde reactions. It is thought that these pyridine polymers provide cross-links that bridge randomly spaced primary amino groups in cells.

With the exception of some phospholipids that contain primary amines (e.g. phosphatidylserine and phosphatidylethanolamine), most lipids do not react well with glutaraldehyde. These unfixed lipids can readily be extracted during either the dehydration stage or even during the fixation stage. Although it is unknown whether glutaraldehyde reacts with the amino groups of cytidine and guanine, it has been shown to preserve at least some DNA within the nucleus. It is thought that rather than being a direct fixative of the nucleic acids themselves, glutaraldehyde cross-links DNA-associated proteins. Such a cross-linking between histone H1 and the core histone proteins may "trap" the DNA molecule between the two and reduce further extraction. The reaction of glutaraldehyde with carbohydrates is not well studied, although it has been shown that 40-65% of total glycogen is retained in glutaraldehyde-fixed tissues.

Two factors that should be considered when using glutaraldehyde as a fixative are the temperature of the fixative and the concentration of glutaraldehyde. Although the reaction between glutaraldehyde and cellular reactive groups is enhanced at elevated temperatures, elevated temperatures also extract cellular constituents through autolysis, and for this reason low temperatures are preferred. Low temperature also reduces the shrinkage of mitochondria and other artifacts. Fixation for 2 hr at 0-4°C is generally preferred for routine fixation of plant and animal tissues. As far as total concentration of fixative is concerned, 1.5-4.0% is usually recommended for routine fixations. It must be kept in mind that changes in the concentration of the glutaraldehyde will result in changes in the overall osmolarity of the total fixative solution. For this reason the selection of fixative concentration, buffer concentration, and additional compounds must be balanced to yield a final osmolarity that is desirable for the particular tissue.

Some of the drawbacks to using glutaraldehyde as a fixative include the fact that it does not penetrate as readily as some other fixatives. Because of its larger size glutaraldehyde does not pass through membranes as readily as do other, smaller aldehydes such as formaldehyde and acrolein. None of the aldehydes impart any electron opacity to the sample (they have no heavy metal atoms associated with them). Glutaraldehyde often causes membrane systems to vesiculate and is incapable of rendering most lipids insoluble in dehydrating solutions; it is in general a poor fixative for the preservation of membranes. Glutaraldehyde is also a powerful inhibitor of enzymatic activity and should be avoided or minimized for immunocytochemical studies. Some of these problems can be minimized by using osmium tetroxide as a secondary fixative. This will be discussed later.


One other aldehyde, acrolein, is also used as a fixative for electron microscopy. Acrolein is a three-carbon monoaldehyde with the structure:

[p. 40 Hayat]

Acrolein penetrates tissue much faster than glutaraldehyde (1.0 mm/hr in rat liver vs. 0.4 mm/hr) and even formaldehyde. Its major use is in fixing tissues in which penetration is a problem, including specimens that are large, dense, or covered with impermeable substances (e.g. waxes or chitin). It is a highly reactive chemical and will self-polymerize on exposure to light, air, or certain other chemicals. In addition to being highly toxic it is also highly flammable. Its highly acrid odor serves as a warning of its presence, but it should always be used in a fume hood and with gloves.
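The rat-liver rates quoted above make for a quick back-of-the-envelope comparison. The sketch below assumes a 0.5 mm tissue block (the maximum dimension recommended earlier), so the fixative front must travel about 0.25 mm from each surface to reach the center; the rates are taken directly from the text.

```python
# Rough penetration-time estimate: time for a fixative front to reach
# the center of a 0.5 mm tissue block, using the rates quoted in the
# notes (rat liver). Purely illustrative; real penetration is not a
# simple constant-rate front.

depth_mm = 0.25  # half of the recommended 0.5 mm maximum dimension

rates_mm_per_hr = {"acrolein": 1.0, "glutaraldehyde": 0.4}

for fixative, rate in rates_mm_per_hr.items():
    minutes = depth_mm / rate * 60
    print(f"{fixative}: ~{minutes:.0f} min to reach the center")
```

Even at these idealized rates, glutaraldehyde needs roughly two and a half times as long as acrolein, which is why small initial specimens matter so much.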

Osmium Tetroxide:

Osmium tetroxide or "osmic acid" was one of the first fixatives used in electron microscopy. It is a non-polar tetrahedral molecule that has the following structure:


Each of the linked oxygens is a potential reaction site. Osmium tetroxide can be dissolved in water, acetone, carbon tetrachloride, and other solvents. It can be used as a liquid fixative, or one can fix specimens by exposure to osmium vapors. It has a relatively slow rate of penetration and for this reason is usually used as a secondary post-fixative rather than as a primary fixative. In addition to acting as an excellent fixative, the electron dense osmium atoms also serve as an electron stain, imparting contrast to the specimen. Osmium tetroxide reacts poorly with proteins and carbohydrates and is most useful in its ability to fix lipids. Of these, unsaturated fatty acids are more reactive than saturated fatty acids. It is thought that osmium tetroxide specifically reacts with olefinic double bonds. For this reason post-fixation in osmium is essential if one is to preserve membranes and lipid-containing bodies. Osmium tetroxide is also useful for the stabilization of certain proteins and can serve to cross-link some proteins and unsaturated lipids. It has also been shown to react with the ribose group of nucleic acids and can to a certain extent be used to fix DNA. Osmium can also react with nucleoproteins associated with the DNA and help to stain the nucleoli.

If used alone osmium tetroxide can induce some gross swelling of the tissue. This effect is somewhat offset by the tissue shrinkage caused by subsequent dehydration and infiltration, and can be minimized by the addition of electrolyte or non-electrolyte additives. Osmium is typically used at a concentration of 1-2% in buffer. It should be avoided when cytochemical studies are to be performed, as it almost totally destroys the antigenicity of reactive sites. Like other fixatives, osmium tetroxide is highly toxic and should be handled with gloves in a fume hood. It is very volatile, and by the time you detect it by smell it will already be fixing your tissues. Fixative waste should be disposed of properly and never put down the sink. Although osmium tetroxide is typically used as a secondary fixative, it is often used in conjunction with glutaraldehyde in what is sometimes referred to as a glut/osmium cocktail. This mixture is particularly useful for fixing single cells. Because aldehydes can reduce osmium, it is recommended that this fixation be carried out at 4°C for 30 min or less, and the mixture should be made immediately prior to use. A similar chemical, ruthenium tetroxide, is sometimes used as a less expensive alternative to osmium tetroxide.


Potassium permanganate (KMnO4) was first used as a fixative for electron microscopy by Luft in 1956. Although permanganate extensively extracts most cellular substances (proteins, nucleic acids, etc.), it is exceptionally good at preserving membranes. Permanganates penetrate tissue faster than other commonly used fixatives. They have also been shown to preserve some carbohydrates that are extracted by glutaraldehyde.


Diimidoesters are a class of compounds that carry an amido group (NH2+) adjacent to each functional group. They have the general formula:


Diimidoesters can cross-link proteins by reacting the carbon adjacent to their amido group with an alpha-amino group of the protein. One possible problem with using diimidoesters is that they cross-link best at high pH (9.0-9.5). Comparisons between glutaraldehyde and diimidoesters show little difference in cellular structure. The primary advantage they have over glutaraldehyde is that even after extensive cross-linking, proteins seem to retain their antigenic and enzymatic reactivities. For this reason they may be more useful than fixatives such as formaldehyde in preserving both ultrastructure and cytochemistry.

Other Fixatives: A few other fixatives that have been used in electron microscopy include Uranyl Acetate, Potassium Ferricyanide, Tannic acid, and Picric acid.

Methods of Fixation:

There are basically four major modes of chemical fixation: 1) vascular perfusion; 2) immersion; 3) dripping on the surface of the tissue; 4) injection into the tissue.

Essentially, perfusion involves the injection of the fixative into the vascular system of the animal following replacement of the blood with a saline solution. This allows the circulatory system to deliver the fixative to the organs by way of blood vessels and capillaries. It provides rapid and uniform delivery of fixative and begins the fixation before the arrest of the circulatory system. It also reduces the trauma to the tissue often associated with sudden death and excision.

The immersion method of fixation is by far the most common method used. Changes that are induced in the tissue are often the result of physical and chemical stress associated with the removal of the tissue. In order for fixation to be successful the tissue must be cut into small pieces. This mechanical disruption of the tissue can often result in a loss of ultrastructural preservation. Obviously there are many organisms and tissues for which vascular perfusion is impractical or impossible. In these cases immersion is the only real alternative.

Dripping and direct injection are two other methods of fixation designed to fix the tissue while it is still in as close to the natural condition as possible. The fixative is dripped or injected into the organ while it is attached to the freshly killed animal.

Alternative Fixations:

[Freeze sub and other rapid freezing techniques will be discussed later by JJP]


Following chemical fixation the fixative must be removed from the tissue by rinsing. This is usually done by removing the fixative solution and replacing it with pure buffer of the same concentration and pH. Two to three changes of buffer over a period of 10-20 min are usually sufficient to remove most of the fixative. Additionally, it is useful to rinse out the buffer with distilled water; this helps eliminate the possibility that residual salts might affect polymerization. The water is then removed by bringing the sample through a graded series of either ethanol or acetone. Since the shrinkage problems that often accompany dehydration are more pronounced with sudden changes in solvent concentration, it is preferable to use a number of short exposures to gradually increasing concentrations of solvent.

Generally one of two procedures is followed prior to infiltration and embedment with epoxy resins. The first involves bringing the sample through a graded ethanol series up to 100%, followed by two changes in 100% propylene oxide (PO). Propylene oxide is used for several reasons. First, most epoxy resins are more soluble in PO than in pure ethanol. Second, PO contains a free epoxy group and thus will not separate from the epoxy resin even if small amounts are left following infiltration. On the negative side, PO is extremely good at extracting lipids from cells, perhaps even those previously fixed by osmium tetroxide. PO is quite reactive and may interfere with cellular components, rendering them non-functional for cytochemical studies. Also, the epoxy group of PO can react with the epoxy groups of the resin and inhibit polymerization; this can adversely affect the hardness and cutting properties of the block.

An alternative dehydration schedule uses acetone from 5-100%. There is some evidence that acetone causes less specimen shrinkage and lipid extraction than does ethanol; phospholipids are particularly resistant to extraction by acetone. It is also non-reactive with osmium tetroxide and will not interfere with epoxy resin polymerization. Acetone is completely miscible with most epoxy resins and is not known to radically alter protein antigenicity.

The most significant problems caused by dehydration are shrinkage and extraction of cell constituents. For these reasons dehydration times should be kept to a minimum. As with fixation, this process is aided by having small pieces of material, which reduce the time it takes for diffusion to occur. In some cases the dehydration and infiltration schedule can be shortened to a few hours; however, such shortened procedures should not be used as standard fixation protocols. Some resins can tolerate a small amount of water, and one can thus begin the infiltration process before the dehydration is complete.
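A graded series of the kind described above is easy to express as a simple table in code. The sketch below is hypothetical: the concentrations and times are illustrative round numbers for a generic acetone series, not a recommended protocol, and real schedules vary with tissue type.

```python
# A hypothetical graded acetone dehydration schedule, for illustration.
# Each step is (percent acetone, minutes); values are assumed, not a
# recommended protocol. Short steps at gradually increasing
# concentrations minimize shrinkage, per the notes above.

steps = [(30, 10), (50, 10), (70, 10), (90, 10), (100, 15), (100, 15)]

for i, (pct, minutes) in enumerate(steps, start=1):
    print(f"Step {i}: {pct}% acetone, {minutes} min")

total = sum(minutes for _, minutes in steps)
print(f"Total dehydration time: {total} min")
```

Writing the schedule out this way makes it easy to see the total exposure time, which should be kept as short as the tissue allows.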

Related to dehydration is the process of infiltration. Like dehydration, infiltration involves the replacement of one fluid with another, in this case the solvent with the unpolymerized resin. Because this is a diffusion-dependent event, several factors can influence both dehydration and infiltration. One, of course, is the size of the sample. Another is the use of a slow rotator to keep the sample moving and always in contact with fresh fluid. Placing the sample under vacuum can also aid the process.

Embedding Media

The sole purpose of embedding media for electron microscopy is to enable the object of interest to be cut sufficiently thin for the microscope to develop its full resolution. As such the embedding medium contributes neither to the staining of the object nor to the resolving power of the microscope. The best embedding medium permits thin sectioning with the least damage during preparation and gives the least interference during microscopy. In short it supports and holds together the tissue while remaining non-reactive with the electron beam. By non-reactive we generally mean non-volatile when struck by the beam and non-interfering with the passage of electrons. Early embedding media used by biologists included gelatin, celloidin (nitrocellulose), and paraffin, but the 100 nm or smaller sections that are required for electron microscopy cannot be cut from any of these media.

A good embedding medium for electron microscopy should have the following properties:

1) The medium should be formed by conversion of a monomer to a polymer. The monomer should be of low molecular weight and low viscosity, and should polymerize smoothly near room temperature with a minimum of shrinkage.

2) The final polymer should be transparent to passage of electrons.

3) The polymer should be mechanically stable to radiation.

4) The density of the polymerized resin should be low. This will provide the greatest contrast for the tissue.

5) The resin should bind tightly to the tissue so that sectioning will not tear tissue from the resin.

6) The resin should not chemically alter the tissue. This includes excessive extraction of components as well as alterations that will affect histochemistry.

7) The resin should cut well and be hydrophilic enough to allow lubrication by water against the knife.

The first embedding media to be successfully used in electron microscopy were the methacrylates. Methacrylates or "acrylic resins" are monomers that have the generalized structure:

They polymerize by a cross linking of the free radical group of each monomer:


Methacrylates have many advantages over other embedding media. These include rapid penetration of tissues, resulting from their low viscosity and the rapid diffusion of low molecular weight monomers. The resulting plastic is strong enough to be cut very thin. They are relatively nontoxic and inexpensive, and because they can polymerize at room temperature they do not require any special handling or equipment. The degree of final block hardness can be predictably controlled by mixing methyl (hard) and n-butyl (soft) methacrylates. They give good contrast of the embedded material. One recently introduced methacrylate resin is called "Unicryl." The makers of Unicryl claim that it can be used for both light and electron microscopy. Its advantage over other methacrylates stems from the fact that the resin shears slightly ahead of the advancing knife edge, creating more of a fractured surface than a cut one. The microtopography that results from this fracturing exposes many more biomolecules for cytochemical localization experiments such as immunolabeling.

Some of the disadvantages of methacrylates include the fact that the specimen must be fully dehydrated in an organic solvent before infiltration can be carried out. A second disadvantage is that methacrylate plastics are slightly unstable under the electron beam and can be depolymerized by heat. The most serious flaw, and the one that has resulted in methacrylates being used infrequently for ultrastructural studies, is that the monomers can polymerize unevenly in the block. This is because the polymerization rate of methacrylates is dependent on the viscosity of the mixture. The fine texture of the fixed tissue is apparently sufficient to restrict the motion of the growing polymer chains. As a result the medium polymerizes in the tissue before it does in the surrounding pure plastic, and the tissue can become greatly swollen (up to 8 times its original size). Although a variety of modifications to the polymerization process have been tried (different accelerators, inhibitors, changes in temperature and light, etc.), this problem has not been completely overcome and remains one of the main drawbacks to using methacrylates.

Polyester Resins:

Polyester resins cure by way of a free radical mechanism similar to that of the methacrylates. Often styrenes, or even methacrylates themselves, are used as cross-linking bridges between the free radical groups of the polyesters. Because they are highly nonpolar, polyester resins are not readily miscible with ethanol, and for this reason acetone is usually chosen as the dehydrating agent. One of the more commonly used polyester resins is Vestopal. Although more resistant to electron beam damage than the methacrylates, polyester resins are not the most stable embedding resins in this regard.

Epoxy Resins:

Perhaps the most widely used class of embedding media is the group known as epoxy resins. They are characterized by the presence of an epoxy group (a three-membered ring of 2 carbons and 1 oxygen) which upon rupture provides the energy required to drive polymer formation. Unlike polyesters and methacrylates, which react by way of free radical groups forming very specific bonds, the epoxy ring reacts with virtually any available hydrogen that can be pried loose as a proton from any source. For this reason epoxy resins establish cross links not only with the other resin molecules but also with the tissue itself and even the container in which the tissue is being embedded. Epoxies usually require large quantities of a second nonepoxy molecule with which to condense, referred to as the "hardener." Primarily these hardeners are a group of dicarboxylic acid anhydrides. One of these is dodecenyl succinic anhydride (DDSA), which continues to be used in most epoxy resin recipes. Another common one is nadic methyl anhydride (NMA). Sometimes these two are added in different proportions to control the relative hardness of the polymerized blocks. In addition to the epoxy resin and the hardeners, an accelerator may also be added to catalyze the reaction. It is important when mixing the components of an epoxy resin that the epoxy and hardeners be thoroughly mixed before the accelerator is added. Failure to do this may result in unevenly polymerized blocks and tissues. Some of the more common epoxy resin mixtures include Spurr's resin, Araldite, and Epon 812.

Polyester and epoxy resins offer several advantages over methacrylates. Perhaps the most significant is that the problem of tissue swelling upon polymerization is largely avoided. One disadvantage is that many polyester and epoxy resins are significantly more viscous than the methacrylates, and thus their ability to penetrate the tissue is reduced. This disadvantage can be partially overcome by extending the infiltration time, keeping the size of the tissue to be infiltrated small, and/or using techniques such as vacuum infiltration. A final method of overcoming the viscosity factor is to use a clearing agent. A clearing agent is in effect a very low viscosity resin that replaces the dehydrating agent and thus makes infiltration of the final resin easier and more complete. For polyester resins that use styrene as the cross-linking agent, pure styrene is used. For epoxy resins, propylene oxide (epoxy propane) is used between the last dehydration step and the beginning of infiltration. One other disadvantage of using highly viscous media is that the various ingredients do not mix well, and great care must be taken to ensure that the components are thoroughly combined before proceeding with infiltration and embedment.

With respect to electron beam stability, the epoxy resins are by far superior to both methacrylates and polyester resins in their ability to resist damage from electron radiation. Thus epoxy resins do not lose a significant amount of their mass when exposed to the beam. Both epoxy and polyester resins seem to polymerize fairly evenly, thus avoiding one of the major disadvantages of methacrylates. Likewise, shrinkage during polymerization is reduced when using these resins, especially the epoxies.

One other factor that should be considered in choosing an embedding medium is the relative toxicity of the compounds. In general the epoxy resins are the most toxic and carcinogenic of all the embedding media and for this reason should be handled with the utmost care. Gloves and a well ventilated fume hood are essential to the proper handling of embedding media. Low viscosity epoxies such as Spurr's resin achieve their low viscosity by carrying two reactive epoxy groups per molecule. This has the effect of making them twice as hazardous as comparably sized mono-epoxy compounds and suggests that they be used only when other embedding media are unsuitable. It is also important to remember that propylene oxide is itself an epoxy compound and is one of the more hazardous chemicals you are likely to encounter in EM. Methacrylates and styrene have a moderate toxicity rating and for this reason should also be handled carefully.

Alternative Embedding Media:

One of the drawbacks to most plastic resins is that they require the tissue to be thoroughly dehydrated with an organic solvent before infiltration can begin. This presents the problem of extracting various substances, especially lipids, from the tissue. It is thought by some that resins which are polar, and therefore somewhat water soluble, would eliminate or reduce this problem of lipid extraction. One difficulty is that all of the known resins are themselves organic solvents, which contribute to the problem of lipid extraction. Nevertheless, some water-soluble resins such as Durcupan and Aquon are useful in ultrastructural studies. By virtue of the fact that they are polar and therefore highly hydrophilic, sections cut from such resins are easily penetrated by water based compounds (e.g. antibodies, enzyme treatments, etc.). In those cases where this is advantageous, water-soluble resins can be of great use.

Another embedding medium that has been used is a 10% solution of pure polystyrene dissolved in acetone. As the acetone evaporates the tissue is covered with more styrene/acetone until all of the acetone has evaporated and the sample is embedded in pure styrene. Although sections are a little more difficult to cut from such blocks than from epoxy blocks, they can be post-treated in a mild acetone solution, which exposes portions of the tissue and makes them more accessible to enzyme or antibody treatment.

Two embedding resins that have in recent years gained in popularity are the acrylic resins LR White and Lowicryl. The major advantage of these resins is that they are both hydrophilic and very permeable to immunoreagents. This eliminates the need for pre-etching of the sections prior to treatment with antibodies. LR White and a comparable product, LR Gold, are acrylic resins that are at least partially (up to 10%) miscible with water. Because of this, infiltration of the resin can begin without prolonged dehydration to 100% ethanol. Still it should be remembered that the resin itself can extract substances from the tissue. Both LR White and Lowicryl have extremely low viscosities. For this reason infiltration schedules can be greatly shortened when compared to epoxy resins, thus reducing the extraction effect of the resin. Although LR White can be polymerized by heating to 50 degrees for 24 hours, it can also be cross-linked by exposure to U.V. light or a chemical accelerator. One problem with LR White is that the presence of oxygen greatly interferes with the polymerization process and must be scrupulously avoided. This is done by either carrying out the polymerization in an embedding mold that is impermeable to oxygen (such as gelatin capsules, not BEEM capsules) or placing the samples in an alternative atmospheric environment (such as a nitrogen chamber or ZipLock bag).

Another group of methacrylate resins, marketed under the name Lowicryl, was designed to infiltrate and polymerize tissues at very low temperatures. This approach has the advantage of reducing polymerization artifacts when tissues are "lightly" fixed. This is especially important in immunocytochemical studies in which the percentage of glutaraldehyde is kept to a minimum and osmium tetroxide is avoided completely. It has also recently become useful in protocols where rapid freezing and freeze substitution are used to preserve the tissue. Some Lowicryls remain liquids of very low viscosity down to -80 degrees. Polymerization of Lowicryl at these reduced temperatures is carried out by exposure to U.V. light. This must be done carefully since the reaction is exothermic, and if the U.V. source is too close to the sample uneven heating can occur. This tends to cause uneven polymerization and defeats the purpose of carrying out the process in the cold. LR Gold can also be used to embed unfixed material at infiltration temperatures of -25 degrees and is a less expensive alternative to the Lowicryls.

Once polymerized, LR White and Lowicryl can be sectioned as normal blocks, although the knife often must be kept on the dry side to prevent water droplet formation on the hydrophilic block face. As with other methacrylates, the LR resins and Lowicryl are relatively non-toxic and easy to use. Their drawbacks are the same as those for other methacrylates, but their usefulness in cytochemical studies ensures their growing use in years to come.

Polyethylene Glycols:

Polyethylene glycols (PEG) are a class of embedding media that are easily extracted by common solvents. This means that following sectioning the resin can be removed and resinless sections examined in the TEM. This offers the advantage of increased contrast (there is no additional resin to scatter the electrons) and provides greater access to cellular constituents for immunocytochemical studies. Tissue is fixed normally and dehydrated up to 100% ethanol. The PEG is melted at 60 degrees and mixed 1:1 with ethanol. Infiltration continues in 100% PEG at 60 degrees. The tissue is transferred to warmed gelatin capsules filled with warm PEG. After the tissue sinks, the capsules are cooled in either LN2 or a -20 C freezer. Sections are cut with a dry knife and placed on grids. The grids are placed in ethanol, which dissolves out the PEG, and are then critical point dried and examined or further processed.

Support Films

One question that always comes to mind is how the final thin sections of epoxy or polyester resin are actually placed into the TEM and examined. The answer is through the use of thin metal support grids. These come in a great variety of sizes, shapes, and materials and will be discussed in greater detail in the lectures dealing with sectioning. The most common types are known as mesh grids. These are essentially small screens made from thin copper. Mesh sizes range from 50 to 1000, where the mesh number refers to the number of grid openings per linear inch. The smaller the mesh number (e.g. 50), the larger the hole size and the greater the ratio of open area to covered area. Sections that are picked up on grids of 300 to 400 mesh are supported by at least several small strips of metal. Such sections are generally strong enough to withstand examination within the TEM without further support. For routine examination of sections no additional support film is required or indeed desired.
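The mesh-number arithmetic above is easy to work out. A short sketch (the 25 micrometer bar width is a hypothetical figure for illustration; actual bar widths vary by manufacturer):

```python
# Mesh number = grid openings per linear inch (25.4 mm).
def pitch_um(mesh):
    """Center-to-center spacing of grid openings, in micrometers."""
    return 25.4e3 / mesh

def open_fraction(mesh, bar_um=25.0):
    """Approximate open-area fraction of a square mesh, assuming a
    hypothetical grid-bar width of bar_um micrometers."""
    hole = pitch_um(mesh) - bar_um
    return (hole / pitch_um(mesh)) ** 2

# A 50 mesh grid has ~508 um between opening centers; a 400 mesh grid
# only ~63.5 um, with a much smaller open-area fraction.
```

This is why low mesh numbers mean large holes: halving the mesh number doubles the opening pitch, and for a fixed bar width the open-area ratio rises accordingly.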

There are however times when uncoated or "naked" grids are not desirable. These include negative staining preparations in which a suspension of cells or particles is placed directly on a grid, cases in which fragile sections, such as those cut from Lowicryl or other methacrylates, need extra support, and cases in which it is critical that large open areas of the section not be obscured by grid bars (e.g. low magnification work, a diffuse number of cells in the section, serial reconstruction, etc.). There are still other times when a coated grid, even a 300-400 mesh grid, might be used (e.g. fragile replicas, or sections that will receive further treatment that might loosen them from naked grids). For these applications it is essential that the grid be coated with some sort of support film on which to place the specimen.

The two major criteria that a support film must meet are that it be mechanically strong enough to support the specimen when exposed to the electron beam and that it be as electron transparent as possible. In addition, features such as absence of irregularities, a high signal to noise ratio when compared to the specimen, and ease of preparation should also be considered. To date only a few materials have proved satisfactory in meeting these criteria. These include carbon films, graphite oxide, and various plastics. All of these can be made very thin and are of relatively low atomic weight so as not to interfere with the electron beam.

Of the plastics used as support films, Collodion (nitrocellulose) and Formvar are the most common. Others include Butvar, Pioloform, and polystyrene. Collodion and Formvar are both somewhat hydrophilic, and this can be either advantageous or disadvantageous depending upon the application (picking up sections +; negative staining + or -). Formvar is a reaction product of polyvinyl alcohol with formaldehyde. It is more stable than Collodion and slightly less hydrophilic. All plastic films are subject to decomposition by the electron beam. They are also prone to further cross-linking by the beam, which adds strength but increases brittleness and shrinkage. One major problem with this cross-linking under the beam is that the specimen may drift for quite a long time until the film reaches an equilibrium at a given illumination. Plastic support films can be strengthened and stabilized by depositing a fine layer of carbon over them. This has the advantage of making the film slightly stronger and more resistant to drift, but the disadvantage of making it hydrophobic and more brittle.

Plastic support films are usually made by floating a fine layer of the film onto the surface of clean water. The film can either be cast directly onto the water or first deposited on a glass slide and then floated off. Films made on glass slides are usually superior, having greater strength and fewer holes. The grids are then carefully laid down on the floating film, taking care always to place the same side of the grid (either shiny or dull) down. The film is then picked up off the water by using a piece of wax, Parafilm, or a clean glass slide to sandwich the grids between the film and the support. The coated grids are then allowed to dry and are stored in a dry, dust-free environment. Most plastic films are made in this manner.

[fig. 6.1 Hayat]

The other commonly used type of support film is made from an ultrathin layer of carbon. Carbon films are made by depositing a layer of carbon on a substrate such as glass, mica, or plastic film. This is done by evaporating a carbon filament under vacuum and depositing the carbon atoms as a very thin (2-5 nm) layer. This is in contrast to plastic films, which often have a thickness of about 30 nm. The carbon film is floated off in a manner similar to that for plastic films, and the grids are brought up from underneath small pieces of the carbon. Alternatively, the grids can be mounted on a fine mesh screen and the water level lowered until the carbon film is gently brought down onto the grids. Carbon films are generally much stronger and more stable than plastic films, and despite the fact that they are generally more difficult to work with, there are a number of applications in which a carbon film must be used. Although the hydrophobic nature of carbon films can make them undesirable, there are several techniques that can be used to partially overcome this (glow discharge, storage in a refrigerator, exposure to ethanol vapors, etc.).

Some special uses of support films include the formation of perforated or "holey" films. These are usually made from plastic film suspensions to which water or glycerin has been added. The added liquid forms microspheres which, when the plastic is dry, leave thousands of tiny holes in the support film. While holes are usually something to be avoided, holey films are very useful for checking the focusing of the TEM (we will do this in lab) and for special applications such as examining isolated membranes. In this way the support film itself acts like a mesh grid with thousands of very tiny open viewing areas.



All of the previously mentioned steps lead up to the final preparatory procedure, sectioning and staining. Of all the techniques in electron microscopy, sectioning is undoubtedly the most difficult to master. The goal of sectioning is to obtain pieces of the embedded tissue that are thin enough for most of the electrons in the beam to penetrate, but with enough material that an image can be discerned. Like support films, sections should be strong and stable under the beam and contribute a minimal amount to electron scattering. Conventional microtomes used for preparing light microscope slides are inadequate for cutting sections of the required thickness for TEM. For this reason an ultramicrotome is employed.

[picture of microtome]

An ultramicrotome works by mounting the polymerized block with its embedded tissue onto a mechanical arm. The microtome brings this arm down in a controlled manner so that the tip of the block just contacts a stationary knife. Because very thin sections are required, metal knife blades are not nearly sharp enough to cut ultrathin sections. For this reason sharpened diamonds or freshly cleaved glass edges are used as knives in ultrathin sectioning.

[fig. 3.31 Hayat]

After passing the edge of the knife the microtome arm retracts slightly so as not to strike the knife edge on the return stroke. The entire arm then moves forward a very small amount (70-130 nm) and repeats the process. It is the precise control of this forward motion of the arm that makes ultramicrotomes as sensitive and expensive as they are. Basically there are two ways of achieving this. One is to have the entire arm assembly mounted on a very fine thread screw. This screw is turned a small amount on each stroke and advances the arm forward. The microtomes you will be using operate in this fashion. The second approach is known as a thermal advance. The entire arm is made of a uniform metal (often aluminum) and surrounded by heating wires. As the arm is warmed the metal expands and thus moves the block forward. By controlling the amount of current going through the wire, the rate of expansion can be controlled and coordinated with the time required for each pass of the knife. As one would expect, any change in ambient temperature (including cool drafts or even warm breath) will affect the expansion of the cutting arm and thus affect section thickness. For this reason thermal microtomes tend to be touchier to use. Nonetheless they are capable of providing good thin sections.
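The thermal advance described above follows the linear expansion relation dL = alpha * L * dT. A rough sketch of the numbers involved (the 10 cm arm length is an assumed value for illustration, not the specification of any particular instrument):

```python
ALPHA_AL = 23e-6  # linear thermal expansion coefficient of aluminum, per K

def advance_nm(arm_length_m, delta_t_k):
    """Forward motion of the block (nm) for a temperature rise of delta_t_k."""
    return ALPHA_AL * arm_length_m * delta_t_k * 1e9

def delta_t_for_advance(arm_length_m, target_nm):
    """Temperature change needed to advance the arm by target_nm."""
    return target_nm / (ALPHA_AL * arm_length_m * 1e9)

# For an assumed 10 cm aluminum arm, a 100 nm advance corresponds to a
# temperature change of only a few hundredths of a kelvin -- which is
# why stray drafts or warm breath visibly change section thickness.
```

The same arithmetic explains the sensitivity of thermal-advance instruments: at roughly 2300 nm of expansion per kelvin for a 10 cm arm, even millikelvin-scale ambient fluctuations are comparable to the section thickness itself.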

A number of factors will affect sectioning. They can be grouped into two major categories: 1) sample problems and 2) equipment problems.

Sample Problems:

The most obvious sample problem, and the one that is essentially impossible to repair, is improper resin polymerization. This can be the result of a number of factors, including incomplete dehydration and/or infiltration, uneven mixing of the resin, improper polymerization conditions, too large a block of tissue, etc. In some rare cases the problem can be corrected by additional curing of the blocks, but usually it means that the sample must be scrapped and the project begun again.

The other type of sample problem usually involves the size and shape of the block face. The major cause of bad sections is excessive pressure along the block face/knife edge interface. The easiest and best way to avoid this is to use a very small block face, which reduces the energy required on the downstroke to cut off a section. Unless there is a specific reason otherwise, the largest dimension of any block face should not exceed 0.5 mm. One easy way to check this is to use a 1 x 2 mm slot grid as a guide; the entire block face should easily fit within the 1 mm opening. Whenever possible the entire block face should be filled with the specimen. This will produce a section of uniform density. The shape of the block face is usually a trapezoid, with the longest parallel edge at the bottom of the block.

[diagram face]

The reason for this shape is several-fold. First, when the top and bottom edges of the block are parallel, the resulting ribbon of sections will come off in a straight line. Second, during sectioning the trapezoidal shape is better supported than any other; as the cut proceeds, the force required gradually decreases. The trapezoid should not be too narrow, however, or the section will fold as it pushes against previously cut sections in the boat. A third benefit can be had if the trapezoid is cut asymmetrically. This allows one to orient a particular portion of the section regardless of how it is oriented in the TEM. There are many problems that can be encountered during sectioning, and a surprising number of them can be attributed to the size and shape of the block face. A good rule of thumb for beginners is to cut your trapezoid as small as possible and then cut it in half. One should also try to make the front surface perpendicular to the block axis (i.e. parallel to the knife edge). This will reduce the number of sections that need to be cut before the entire block face is coming off.
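The sizing rules of thumb above (largest dimension under 0.5 mm, easily fitting the 1 mm opening of a 1 x 2 mm slot grid) can be expressed as a simple check. The function below is an illustrative helper, not a standard tool:

```python
def block_face_ok(width_mm, height_mm, max_dim_mm=0.5, slot_mm=1.0):
    """True if a trimmed block face obeys the rules of thumb above: no
    dimension exceeds max_dim_mm, so it easily fits the slot opening."""
    largest = max(width_mm, height_mm)
    return largest <= max_dim_mm and largest <= slot_mm
```

For example, a 0.3 x 0.45 mm trapezoid passes, while a 0.8 mm face fails the 0.5 mm rule even though it would still fit the slot grid.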

Equipment Problems:

The other major source of sectioning problems is equipment. This should not, and does not, translate into "something is wrong with the ultramicrotome." While it is possible that there is something wrong with the ultramicrotome, it is much more likely that there is something wrong with the way you are sectioning. The most common mistake is for something to be loose, so be sure to check all fittings. Other problems that might cause bad sectioning include an improper knife angle, too much or too little water in the knife boat, the wrong cutting speed, and, most commonly, a dull or scratched knife edge. Since the knife is so critical we will discuss it next.


The cutting precision of ultramicrotomy requires a cutting edge of incredible sharpness. No metal blade, no matter how finely ground, could ever have a sharp enough edge for ultramicrotomy. The only knives that are suitable for ultrathin sectioning are made from either freshly cleaved glass or highly sharpened precious stones (diamonds or sapphires). In order to reduce cutting pressures, all of these knives must be lubricated with water during the cutting process. A water trough or boat is thus an integral part of any ultramicrotome knife. The boat is either a permanent structure into which the polished gem is glued or a plastic or metallic tape structure which is glued to the broken glass knife. The boat is attached to the glass using either dental wax, nail polish, or rubber cement. This type of knife boat is prone to leakage, and one of the most frustrating things is when a beautiful ribbon of sections is lost due to a leaky boat. The major cause of this is incomplete drying of the glue, and for this reason I strongly recommend that you make your glass knives and boats a full 24 hours before you use them. If stored in a dust-free environment, glass knives are usable for up to a week. Once the knife is mounted on the microtome, it is important that the water level within the boat be maintained properly. There should be enough water to completely lubricate the cutting edge but not so much that an inverted meniscus is formed. The proper way of doing this is to add or withdraw water at the same time that the light source is being positioned. A white sheen will appear when the proper level is achieved. The water level and light should be positioned so that the minimal amount of water is required to get a full sheen in the region of the knife in which the cutting will take place. The water level will change with time (evaporation, water removed with sections, etc.) and it is important that this be checked continually.

The cutting edge of a glass knife should be critically evaluated before one attempts to section with it. Any nicks, spurs, or scratches that are visible with the naked eye or under the dissecting microscope should be cause for rejecting the knife, but only if they occur in the cutting region of the knife edge. Every glass knife breaks along what is known as a fracture plane. The geometry of this plane can be estimated from the position of the fracture ridge. The straighter the knife edge, and the more parallel the fracture ridge is to this edge, the better the cutting region of the knife.

[Fig 3.9 Hayat]

In general, the closer the fracture ridge is to the knife edge, the better the cutting property of the knife. For this reason one usually begins in the central portion of the knife and gradually works over towards the left. For diamond knives the cutting edge should be of uniform sharpness along its entire length. Still, many researchers mentally reserve one portion of the cutting edge for trimming and thick sectioning and another for thin sectioning. Do not, however, be overly critical of your glass knives. There is no such thing as a perfect knife. Most glass knives have some sort of imperfection. The trick is learning to spot the imperfections and avoid that part of the knife edge.

Sectioning Imperfections:

There are three major artifacts that commonly occur when sectioning. These are knife marks or scratches, compression, and chatter. Proper use of the equipment can help to avoid all three of these.

Knife marks are caused by either a dull or a dirty knife edge. Although a glass knife edge is very sharp, it is not particularly hard or durable. Repeated contact with the relatively soft embedding resin is enough to introduce microscratches into the edge of the knife. These scratches produce knife marks on the sample, which characteristically run perpendicular to the edge of the knife. Knife marks can range from tiny thread-like lines that are barely noticeable to gaping fissures and/or holes. The only way to prevent knife marks is to cut thin sections from a virgin region of the knife edge. The cutting of a single very thick section can introduce scratches and make that portion of the knife unusable. For this reason great care should be taken when approaching the block, and no more than twenty thin sections should be cut from any one portion of the knife edge. When this limit is reached, the knife should be retracted and shifted towards the right, and the block reapproached for another set of sections. If done properly, a single glass knife can yield 50 to 100 very fine sections.

Compression is usually the result of cutting sections that are too thick or of a dull knife edge. Compression occurs because of the stress placed on the embedding resin during the sectioning process, and it is recognized when the section is shorter than the block face from which it was cut. It must be remembered that the block face is under a great deal of stress during sectioning. First, there is a buildup of pressure as the block moves down and contacts the knife. Because polymerized resin is amorphous rather than crystalline, there is no cleavage plane or path of least resistance along which this stress can be relieved. Second, the thin section must turn ninety degrees in order to float on the surface of the boat, and it is at this bend that most compression occurs.

[Fig. 3.31]

The type of embedding resin can affect compression, with methacrylates generally showing more compression problems than epoxy resins. Another factor is the angle of the knife edge: small-angle knives (< 30 degrees) cause less compression than large-angle edges. If compression is a significant problem it can be minimized in several ways. One, the total clearance angle of the knife can be reduced to a minimum. Two, the cutting speed can be reduced; compression seems to be more of a problem at higher cutting speeds. Three, a harder resin or resin mixture can be used, as these seem to be less prone to compression. And four, sections should not be cut ultrathin; sections less than 40 nm thick show more compression than those in the 50-90 nm range. Compression can be largely relieved from sections while they are floating in the boat. The act of floating alone tends to relax the plastic and reduce the effects of compression. This relaxation can also be achieved through the use of heat or exposure to the vapors of an organic solvent such as chloroform, toluene, or trichloroethylene. Both heat and solvent vapors soften the plastic and allow the surface tension of the water to spread it out, in effect removing the compression from the section. The sections should not come in direct contact with the solvent or heating element, as this would cause them to melt. This "spreading" of sections can be done at any time after the sections have been cut but must be performed before the sections are picked up onto grids.

The third major sectioning problem is chatter. It produces a repetitive array of dark areas that run parallel to the edge of the knife. These marks tend to blend into one another and resemble waves on the sea. The primary cause of chatter is vibration. This vibration may come from the components of the microtome itself; a slightly loose block and/or knife will cause chatter, and for this reason it is especially important to tighten all components. The vibration may also come from the room or the microtome motor, so the microtome should be placed in a vibration-free room on a very stable work surface that is cushioned or vibration-damped. Chatter is produced by the sudden and intermittent release of pressure as the knife cuts through the block. Since these pressures can differ between the tissue and the empty surrounding resin, the entire block face should be occupied by tissue alone whenever possible. If chatter continues to be a problem, changes in the cutting speed and thickness settings may help to minimize it, but the most important factor is still to ensure that vibration is kept to a minimum.

Section Thickness:

There is no one ideal section thickness for thin sections. What may be too thin for one sample may be too thick for another. Electron opacity of the sample and the type of resin used are factors that must be taken into account as must the thickness of the support film. Relative section thickness can be described using the following terminology:

Thin: 8-100 nm (up to 0.1 um)

Semithin: 0.1-2.5 um

Thick: 2.5-10 um

The section thickness chosen depends not only on the electron opacity of the sample but also on the operating conditions of the TEM. The higher the accelerating voltage, the thicker the section can be. This is because electrons of greater energy are less affected by interactions with atoms in the sample and can therefore pass through a thicker section. For this reason epoxy sections up to 0.25 um thick can be used in a standard TEM operating at 100 kV. For most purposes, however, sections in the range of 40-60 nm are used. When one uses thinner sections the resolution of the sample increases due to the reduced amount of electron scattering, but at the same time contrast is sacrificed (there is less of the sample to stop electrons). For samples that are particularly electron opaque this may not present a problem, but in many cases sections on the order of 40 nm may be too thin to be of practical use even after staining with heavy metals.

Section thickness can be estimated by the use of interference colors produced by the cut sections. Because plastics such as epoxy, polyesters, methacrylates, and Formvar have a refractive index (about 1.5) that differs from that of water (about 1.33), light refracted through the plastic layer produces an interference color. The approximate thickness of a plastic film can be estimated from the particular color produced.

[Refractive chart]

These thicknesses are applicable to both sections and support films.

Picking up Sections:

Once flawless ribbons of sections of the proper thickness are floating on the water in the boat, they must be retrieved and placed onto grids. Basically there are two ways of doing this. One is to come up from beneath the sections with the grid and have the sections adhere to the grid surface as it is pulled at an angle from the water in the boat. Alternatively, the sections can be picked up by coming down from above with the grid held parallel to the surface of the water. It is helpful if the edge of the grid has a slight bend in it to facilitate its being held flat.

[Fig. 3.29]

Once sections are firmly attached to the grid they can be further processed. One of the most common post-sectioning treatments is to stain the sections with a heavy metal salt solution.

Staining of Sections

In order to visualize a specimen in the TEM one must have contrasting regions of electron transparency and electron opacity. Just as in light microscopy, differences in contrast can be accentuated through the use of a stain. To be of use in a TEM a stain must have the ability to stop or strongly deflect the electrons of the beam so that they do not contribute to the final image. The most commonly used stains in electron microscopy are made up of heavy metal salts. These contain atoms of high atomic weight, which are especially good at deflecting electrons. Electron staining falls into one of two categories: 1) positive staining, in which contrast is imparted to the specimen itself, and 2) negative staining, in which the area surrounding the specimen is given increased electron opacity while the specimen itself remains more translucent. We will discuss positive staining here. Negative staining will be covered later in the course.

Positive Staining: We have already discussed one type of positive stain, osmium tetroxide. When OsO4 reacts with biomolecules in the specimen, the osmium atom serves as a bridge between the reacted sites. With an atomic weight of 190 it is of sufficient size to deflect electrons effectively. Because it reacts more readily with lipids than with proteins, osmium tetroxide has the added advantage of being a somewhat structure-specific positive stain.

The two most commonly used post-fixation positive stains are uranyl acetate (MW = 422) and lead citrate (MW = 1054), the two heavy metals being uranium and lead respectively. Both UA and lead citrate are heavy metal salt stains and are categorized as general or non-specific stains. Because they are heavy metal salts they are quite toxic and should be handled and disposed of with great care. UA ions are believed to react with phosphate and amino groups (found in nucleic acids and certain proteins) while lead ions are thought to bind to negatively charged molecules such as hydroxyl groups. Because of this ability to stain different cellular components, UA and lead citrate are often used in conjunction with one another, though not simultaneously for reasons we will see in a few minutes.

Positive stains may be applied either prior to embedding or after sectioning. When applied to the specimen before dehydration this type of staining is referred to as en bloc staining, meaning "in the block." Because it is prone to forming image-degrading precipitates, lead citrate is not used as an en bloc stain. UA, on the other hand, is a very useful en bloc stain and is believed by some to actually act as a fixative in its ability to retain structural detail. When used as an en bloc stain, UA is applied to the specimen as a 0.5% - 4.0% aqueous solution after the initial fixatives (glut and osmium) have been thoroughly rinsed from the specimen. After several hours in the stain the specimen is dehydrated and infiltrated as normally done. The dehydration step should not be long, as UA is soluble in solvents and extended storage in a dehydrating agent will remove most of the UA. En bloc staining can greatly improve the contrast of membranous structures such as mitochondria, Golgi, and ER, as well as DNA and other fine filaments.

Post-embedding Staining: Sections that have been picked up and dried can be stained on their grids. Usually this is done by floating the grids on a drop of 1% - 4% UA for 15-30 minutes. The grids are then thoroughly rinsed, dried, and either stained with lead citrate or stored until they are examined in the TEM. Although grids can theoretically be stained any time after sectioning, it is best to do so within 24 hours of having cut the sections. Grids that have been exposed to the energy of the electron beam will not absorb stain. Some resins are particularly difficult to penetrate and therefore do not stain well. In these cases one can either elevate the temperature of the stain or stain in a methanolic UA solution. UA can be dissolved in 100% methanol and the grids placed into it. All of the steps are the same as for aqueous UA staining, with the exception that the grids must be rehydrated through a graded methanol series before being rinsed in dH2O and finally dried. If this is not done the sections will wrinkle badly due to the temporary dissociation of the sections from the support film.

Lead citrate is often used to stain grids after they have been stained in UA. Because lead citrate is very sensitive to CO2 (it quickly reacts to form a precipitate that can ruin a section), every effort must be made to eliminate this gas from the staining procedure. For this reason very clean glassware, CO2-free water, and other precautions must be used in preparing lead citrate. Sections are stained by floating the grids on drops of lead citrate for 3-5 minutes at room temperature. The drops are placed in a CO2-free environment, which can be made using a glass Petri dish and sodium hydroxide pellets. The NaOH actively scavenges CO2, and after a few minutes the atmosphere inside the Petri dish is essentially CO2 free. After staining, the sections are rinsed in a 1M NaOH solution (to wash off the lead citrate) and then thoroughly rinsed in dH2O (to rinse off the NaOH). Grids are then blotted dry and stored until needed.

[Section on lens aberrations under SEM file]

Theory of Electron Optics Text 136-140

In order to begin discussing the basic theory of electron optics it is necessary to establish some terminology. When asked for the most important reason for using an electron microscope, most people answer the ability to magnify images, but this is wrong. A photograph of a cigarette can be enlarged many thousands of times when it is placed on a highway billboard, but this does not increase our knowledge of what the cigarette looks like. The most important reason for using an electron microscope is the increased resolving power it offers over other imaging tools such as a regular light microscope. A $25 Sears microscope and a $40,000 Zeiss research microscope can both magnify things the same amount; what you pay for is the increased resolving capability of the Zeiss instrument.

In order to use illumination to magnify the image of something one must be able to deflect the illumination from its path. In a light microscope this is accomplished through the use of glass lenses. As light travels into the lens it is bent because it is now traveling through a medium with a different refractive index; this is the phenomenon of refraction. The properties of a glass lens are determined by its shape and index of refraction (which in turn depends on the optical density of the glass from which the lens is made). The situation in an electron microscope is analogous in that the illumination beam is deflected and the angle of this deflection is determined by the shape of the lens. It differs, however, in that there is no change in the refractive index of the medium (the vacuum in the lens is the same as the vacuum in the column, hence there is no difference in optical density), and so refraction is not the means by which an electron beam is deflected.

Resolution is defined as the ability to distinguish two separate items from one another. It can be quantified as the smallest distance between two objects in which the objects still appear to be separate.

Resolution is dependent on three factors:

1) Diffraction

2) Refraction

3) Dispersion

Maximum theoretical resolution in an optical microscope is equal to one half the wavelength of illumination: d = λ/2.

The wavelength of blue light is 400 nm (= 4000 Å), so the theoretical resolution of an optical microscope is 1/2 × 400 nm = 200 nm = 0.2 um.

Ernst Abbe developed an equation to calculate the actual resolution of an optical microscope. Abbe's equation is:

d = k λ / (n sin θ)

d = actual resolution (distance between two objects)

k = a constant that ranges from 0.612 to 1.2 depending on factors of the lenses (the lower the value the better)

λ = wavelength of the illumination

n = index of refraction of the medium between illumination source and lens

θ = one half the angle of the cone of illumination accepted by the front lens of the objective lens

Together, n sin θ is often called the numerical aperture (N.A.) of the objective lens. The N.A. is often printed on the side of a light microscope's objective lens, with a higher value indicating a better lens.

Thus for an excellent light microscope:

n = 1.5

θ = 90° (1/2 of 180°)

λ = 400 nm

When we plug these values into Abbe's equation we get:

d = (0.612 × 400 nm) / (1.5 × 1.0) = 163.2 nm. This is better than the λ/2 theoretical resolution of 200 nm!

In reality we never have a 180° cone of illumination for any objective lens, and the best value we would ever get for the numerical aperture is 1.4. When we plug this value into Abbe's equation we get a resolution of 174.86 nm, so in reality today's best light microscopes have reached their theoretical resolution!
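These figures are easy to verify numerically. The following sketch (the function name is mine, and the constant is fixed at k = 0.612 as in the worked example above) reproduces both results:

```python
import math

def abbe_resolution(wavelength_nm, n, half_angle_deg, k=0.612):
    """Abbe's equation: d = k * wavelength / (n * sin(theta))."""
    theta = math.radians(half_angle_deg)
    return k * wavelength_nm / (n * math.sin(theta))

# Idealized case from the text: n = 1.5, theta = 90 degrees, 400 nm blue light
d_ideal = abbe_resolution(400, 1.5, 90)
print(f"{d_ideal:.1f} nm")   # -> 163.2 nm

# Realistic best case: numerical aperture n * sin(theta) = 1.4
d_real = 0.612 * 400 / 1.4
print(f"{d_real:.2f} nm")    # -> 174.86 nm
```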

The next major advance occurred in 1923 when the French physicist Louis de Broglie realized that electrons, when energized, can travel in a wave-like fashion similar to that of light. This opened the door for the development of a microscope that utilized electrons, not light, as its illumination source.

The de Broglie equation can be used to calculate the wavelength of a beam of energized electrons:

λ = h / (m v)

λ = wavelength

h = Planck's constant (6.624 × 10^-27 erg·sec)

m = mass of an electron (9.11 × 10^-28 gram = 1/1837 the mass of a proton)

v = velocity of the electron

We can reduce this equation to the following:

λ = 12.3 / √V Å, where V = kinetic energy in volts


So for a TEM operating at an accelerating voltage of 100,000 V (100 keV electrons):

λ = 12.3 / √100,000 = 12.3 / 316.2 = 0.0389 Å, and theoretical resolution would be one half of this, or about 0.0195 Å!

However, we must consider several other factors for the TEM. First, there is no change in optical density between the lens and the source of illumination, so the refractive index in a TEM is always 1.0. Second, the angle of the cone of illumination in a TEM is very small. If we now plug in the actual values and combine de Broglie's equation for wavelength with Abbe's equation we get:

d = (0.612 × 12.3) / (1.0 × sin θ × 316.2) ≈ 2.4 Å for a TEM at 100 kV (this figure corresponds to an aperture half-angle with sin θ ≈ 0.01).
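The combined calculation can be checked in the same way. The aperture half-angle is not stated in the text, so the value sin θ = 0.01 below is an assumption chosen to reproduce the quoted 2.4 Å figure; the function names are mine:

```python
import math

def electron_wavelength_A(accel_volts):
    """Non-relativistic de Broglie wavelength in angstroms: 12.3 / sqrt(V)."""
    return 12.3 / math.sqrt(accel_volts)

def tem_resolution_A(accel_volts, sin_theta, k=0.612, n=1.0):
    """Abbe's equation with the electron wavelength substituted in."""
    return k * electron_wavelength_A(accel_volts) / (n * sin_theta)

wl = electron_wavelength_A(100_000)
print(f"{wl:.4f} A")                            # -> 0.0389 A

d = tem_resolution_A(100_000, sin_theta=0.01)   # assumed aperture half-angle
print(f"{d:.1f} A")                             # -> 2.4 A
```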

Electromagnetic Lenses text 140-147

In order to utilize the electron beam generated in the TEM we must somehow bend or influence the beam in order to focus it. When parallel rays of illumination strike or leave the surface of a lens at an angle other than 90o their direction is changed. This phenomenon which is known as refraction is due to the change in velocity of the light waves as they pass the boundary between media of different densities.

Fig. 16 Witz

In light optics this is accomplished by using a series of ground glass lenses that bend the incoming light to a desired focal point. Unfortunately glass lenses have no such effect on electrons, and for this reason we must use a different type of lens. Basically there are two types of lenses employed in electron optics: electromagnetic lenses and electrostatic lenses. Both serve a function similar to that of glass lenses in that they produce a deviation in the trajectory of the electrons from a point source, causing them to converge at a single focal point. One major difference, however, is that the electrons do not pass through media of different densities, so there are no sharp changes in the velocity of the electrons along the optical axis.

Electrostatic Lenses:

Electrostatic lenses take advantage of the fact that opposite charges attract and like charges repel. Since all of the electrons in an electron beam carry a net negative charge, we can take advantage of this in the electron microscope. If we establish a field that has a net negative charge, all of the electrons entering it will be repelled to a spot away from that field; likewise they will be attracted to a positive charge. By having opposing positive and negative plates a uniform electrostatic field is established, and the electrons are thereby diverted.

[Diagram of + and - plate with field]

We have already seen an example of this in our filament gun assembly, in which the electrons passing between the cathode and anode are somewhat focused by the electrostatic field between them.

Electromagnetic Lenses:

The other way to influence or deflect the beam of electrons is through the use of an electromagnetic field. When two magnetic poles of opposite polarity are set up, a magnetic field is established between them. The lines of magnetic force set up between these poles are referred to as the magnetic flux of the field. This is represented by the symbol Φ (phi) and is defined as "the total number of lines of force about a conductor"; it is equal to the magnetic field intensity. A related but quite different term is the magnetic flux density, represented by B. This refers to "the number of lines of magnetic force per unit area" and in this way differs from the magnetic flux.

[diagram flux density]

A magnetic field can be created by passing electrical current through a conductor. The lines of force come off in a circular fashion perpendicular to the axis of current and follow the right hand rule. The total number of these orthogonal lines or the "magnetic flux" can be increased by increasing the current. The "flux density" increases the closer one gets to the center of the field radius (i.e. closer to the wire).

[diagram of wire and then of lens with flux density]

Related to this is the concept of permeability. Permeability refers to the ability of a material to attract magnetic lines of force and in this way increase the flux density around the material. Permeability, represented by μ (mu), is extremely low for things such as air and vacuum but extremely high for materials such as soft iron. This principle of magnetic permeability is taken advantage of in the TEM by placing soft iron cores into the center of the electromagnetic lens. These pieces have a much reduced diameter and are of extremely uniform composition and construction. They establish the poles of the electromagnetic field and are thus referred to as pole pieces.

[diagram pole pieces]

The flux density of any given lens can be calculated using the following equation:

B = μ N I / L

B = flux density

μ = permeability of the pole piece

N = number of turns of the conductor

I = strength of the current

L = length of the wire coil

Since the magnetic field strength, or flux, is represented by the equation H = N I / L, the flux density can be given as B = μ H. Thus if air is used as the pole piece, μ = 1, so B = H.
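In code, this relationship reads as follows. This is a minimal sketch: the coil values are hypothetical and chosen only for illustration, and the equation is used in the text's form, where μ is a relative, dimensionless permeability:

```python
def field_strength(turns, current, coil_length):
    """Magnetic field strength H = N * I / L (ampere-turns per unit length)."""
    return turns * current / coil_length

def flux_density(mu, turns, current, coil_length):
    """Flux density B = mu * H = mu * N * I / L, as in the text."""
    return mu * field_strength(turns, current, coil_length)

# Hypothetical lens coil: 10,000 turns, 1 A, 0.1 m long
H = field_strength(10_000, 1.0, 0.1)
# With air as the "pole piece" the text takes mu = 1, so B = H
assert flux_density(1, 10_000, 1.0, 0.1) == H
# A soft iron pole piece (mu >> 1) multiplies the flux density accordingly
assert flux_density(1000, 10_000, 1.0, 0.1) == 1000 * H
```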

One other phenomenon of electromagnets is that many materials exhibit a property known as hysteresis. In simple terms, hysteresis is the lag between the magnetization of a material by a current and its demagnetization: some latent magnetism remains after the current is turned off. One other advantage of soft iron pole pieces is that they have very low hysteresis.

When an electron enters a magnetic field it is influenced or diverted at a right angle to the lines of force in that field. If the magnetic field is strong enough it will continually divert the electron into a circular pattern.

[Fig. 26]

Electrons entering the magnetic field of a solenoid rarely encounter lines of magnetic force at right angles. Because both the flux density and the angled direction of lines of magnetic force increase near the perimeter of the solenoid those electrons that are near the edge of the lens are more profoundly affected than those that travel down the optical axis of the lens.

[Fig. 21]

Because the electrons in an electromagnetic lens have two forces acting on them (the downward force of the accelerating voltage and the force of the electromagnetic field), we can think of the motion of an electron as the result of two different force vectors. As the electron enters one end of the solenoid, the accelerating force vector moves it from one end to the other. As it deviates from the optical axis of the lens into the magnetic field of the periphery, it interacts with the lines of force and is deviated towards the center of the lens. Together these two force vectors cause the electron to travel in a helical path through the lens. Next time you're on the TEM, watch what happens to the image as you increase magnification (i.e. increase the magnetic field vector of the projector lens). It will rotate with each increase and then really rotate when the intermediate lens comes on.

TEM Lenses:

There are essentially three different lenses used to form the final image in the TEM. These are the condenser, objective, and projector lenses. In addition to these lenses an intermediate and second condenser lens are usually present. The primary function of the condenser lens is to concentrate and focus the beam of electrons coming off of the filament onto the sample to give a uniformly illuminated sample. The condenser lens is a relatively weak lens with a focal length of a few centimeters. The objective lens and its associated pole pieces is the heart of the TEM and the most critical of all the lenses. It forms the initial enlarged image of the illuminated portion of the specimen in a plane that is suitable for further enlargement by the projector lens.

[Fig. 38]

In order to provide the best image the objective lens must meet certain criteria. First, the focal length of the objective lens should be as short as possible (usually 1-5 mm is practical); this tends to minimize the effects of chromatic and spherical aberrations. Second, because the focal length is so short and the specimen must be situated close to the focal plane of the objective, the lens must be made so that the specimen can be positioned right down into it. The specimen must be placed in a non-magnetic holder so that it does not influence the electromagnetic field. In most cases the magnification provided by the objective lens is about 100X. Third, a minimum clearance must be provided for inserting both the specimen and a physical aperture in or close to the objective lens. And fourth, space must be made for the placement of magnets used for correcting the astigmatism of the lens.

As has been mentioned before the TEM builds an image by way of differential contrast. Those electrons that pass through the sample go on to form the image while those that are stopped or deflected by dense atoms in the specimen are subtracted from the image. In this way a black & white image is formed. Some electrons pass close to a heavy atom and are thus only slightly deflected. Thus many of these "scattered" electrons eventually make their way down the column and contribute to the image. In order to eliminate these scattered electrons from the image we can place an aperture in the objective lens that will stop all those electrons that have deviated from the optical path. The smaller the aperture we use the more of these scattered electrons we will stop and the greater will be our image contrast. However it must be remembered that the smaller the aperture the smaller will be our lens aperture angle and when we plug into Abbe's equation the poorer will be our resolution. Thus we sacrifice resolution while we gain contrast.

[Fig 6-7]

Finally one uses the projector lens to project the final magnified image onto the phosphor screen or photographic emulsion. The projector lens has a broad range of focal lengths and a great depth of focus. Depth of focus is that region of space in which the object remains in focus despite slight changes in the focal length. This is partly a function of the lens aperture angle in which a smaller angle results in a larger depth of focus. It is in the projector lens that the majority of the magnification occurs. Thus total magnification is a product of the objective and projector magnifications. For higher magnifications an intermediate lens is often added between the objective and projector lenses. This lens serves to further magnify the image.

MT = Mo x Mp x Mi
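Total magnification is then just this product. A trivial sketch (the lens values below are hypothetical, apart from the ~100X objective figure mentioned above):

```python
def total_magnification(m_objective, m_projector, m_intermediate=1):
    """MT = Mo x Mp x Mi (Mi = 1 when no intermediate lens is used)."""
    return m_objective * m_projector * m_intermediate

# Hypothetical two-lens setup: 100X objective, 200X projector
print(total_magnification(100, 200))       # -> 20000

# Adding a hypothetical 5X intermediate lens
print(total_magnification(100, 200, 5))    # -> 100000
```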

The image is then projected onto either the fluorescent screen or the photographic film. Remember that the image is focused at the objective lens. It is a focused image that is projected, so the plane in which the final image appears is not critical and the image remains in focus regardless. What does change is the relative size of the projected image, and thus the magnification on the screen and that on the photographic film will differ.

Vacuum Systems Text 163-170

The final part of the TEM that needs to be covered is the vacuum system. There are three main reasons why the microscope column must be operated under very high vacuum. The first is to avoid collisions between electrons of the beam and stray molecules. Such collisions can result in a spreading or diffusing of the beam or, more seriously, can result in a volatilization event if the molecule is organic in nature. Such volatilizations can severely contaminate the microscope column, especially in finely machined regions such as apertures and pole pieces, and the deposited materials will degrade the image. A second reason is to avoid discharge between the cathode and the anode. There exists a very high voltage differential between these two components, and stray air or gas molecules can act as charge carriers between the two. In conventional capacitors non-conducting oil or some other stable insulator is placed between the conductors; in an electron microscope the high vacuum serves this insulating purpose. Finally, the area surrounding the electron emitter must be kept free of gas molecules, especially oxygen. If it were not, the life of a thermionic emitter would be greatly shortened, and in the case of field emission we would not be able to generate electrons at all. Because electron microscopes must operate at what is referred to as high vacuum, some sort of pumping system must be employed that will allow most of the air to be removed from the column. Vacuum ranges can be broken down as follows:

Rough Pumping: 10^3 - 10^-3 Torr

High Vacuum: 10^-3 - 10^-7 Torr

Very High Vacuum: 10^-7 - 10^-9 Torr

Ultra High Vacuum: better than 10^-9 Torr
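These ranges can be captured in a small helper function. This is just a restatement of the table above; the function name and return labels are mine:

```python
def vacuum_regime(pressure_torr):
    """Classify a pressure (in Torr) according to the vacuum ranges above."""
    if pressure_torr > 1e-3:
        return "rough"
    elif pressure_torr > 1e-7:
        return "high"
    elif pressure_torr > 1e-9:
        return "very high"
    else:
        return "ultra high"

print(vacuum_regime(1e-2))   # -> "rough"; mechanical-pump territory
print(vacuum_regime(1e-6))   # -> "high"; diffusion-pump territory
print(vacuum_regime(1e-10))  # -> "ultra high"; e.g. a sputter-ion pump
```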

In most TEMs vacuum is achieved through a combination of two types of pumps, mechanical and diffusion pumps.

Mechanical Pumps:

The mechanical, or roughing, pump is a relatively simple device. It consists of an eccentrically positioned rotor from which movable blades swing out and make contact with the wall of the pump housing. The blades are either spring loaded or extended by the centrifugal force of the spinning rotor. As the rotor turns, the blades trap air and push it out through the exhaust outlet. The outlet valve and moving parts of the pump are bathed in oil that serves both to lubricate the moving parts and to trap air molecules. Typically a good quality mechanical pump will achieve a vacuum on the order of 10^-2 Torr.

[Fig. N-2]

Diffusion Pump:

Diffusion pumps are so named because rather than actively "pulling" air molecules out of a space, they wait for the molecules to diffuse into a region of the pump where they are then trapped and removed. To accomplish this a liquid of some sort is brought to a boil. The hot vapor rises in the pump and is directed downwards by way of a baffle system known as a chimney stack. Stray air molecules that diffuse into this portion of the pump collide with the vapor molecules and are trapped. The combined molecules continue their downward path until they contact the sides of the pump, which are kept cool by a series of cold-water tubing that surrounds the pump. The vapor then condenses and drops to the bottom of the pump, where it is once again heated to boiling. As it boils, the liquid releases its trapped air molecules, which are then removed by the mechanical pump connected to the base of the diffusion pump. Although special diffusion pump oil (not mechanical pump oil) is usually the liquid of choice, other liquids such as mercury are also used, and sometimes several diffusion pumps will be used in tandem. Used in conjunction with a mechanical rotary pump, a diffusion pump can achieve a vacuum of 10^-6 Torr or better.

[Figs 59 & 58]

In addition to mechanical rotary pumps and oil diffusion pumps there are several other types of vacuum pumps employed in electron microscopes.

Sputter-ion Pumps:

This type of vacuum pump is capable of producing ultrahigh vacuum without the use of pumping fluids such as oil. Because backstreaming of oil molecules can be a problem in a TEM it is often desirable to use such a pump. A sputter-ion pump functions by establishing an electrical potential in a strong magnetic field. Energized electrons collide with air molecules and ionize them. The gas ions then bombard a titanium plate and cause the titanium atoms to be knocked off, or sputtered. These titanium atoms are deposited on the anode plate and form stable bonds with gas molecules. A metal that binds gas molecules is called a "getter," and another name for these pumps is ion-getter pumps. These pumps produce clean vacuum down into the range of 10^-9 Torr.

Orbitron Pumps:

Orbitron pumps are also oil-free pumps. They use an electrostatic field to achieve high pumping and sublimation to free the getter material. Electrons are produced by two filaments. The electrons heat titanium cylinders and cause the metal to sublimate. The titanium vapors are deposited onto the outer cylinder wall and trap stray gas molecules. These pumps generally pump down to the 10^-6 Torr range.

Turbomolecular Pumps:

Turbomolecular pumps consist of a series of turbines arranged so as to produce an ever-decreasing pressure from one chamber to the next. At full speed turbomolecular pumps rotate at about 15,000 rpm and sound a bit like a jet engine. They have the advantage of not using oil as the pumping agent and thus avoid the possibility of backstreaming. They also have the advantage of operating at vacuums ranging from atmospheric pressure all the way down to 10⁻⁷ Torr.

Cryogenic Pumps:

Cryogenic pumps use a metal surface cooled to ultracold temperatures to trap air molecules that randomly come in contact with it. These pumps are used only in conjunction with other pumps because water condensation would occur at higher pressures. Most TEMs use a form of cryogenic pump known as a cold trap. It is cooled with liquid nitrogen by way of a thermal cold finger and is placed near the specimen. Gas, oil, or dirt molecules that are hit by the beam can volatilize and contaminate the inside of the column and/or the specimen. A cold trap offers little pumping action but does help to keep the column clean. When the metal surface warms, the trapped gas molecules are released. Cryogenic pumps can improve the vacuum from 10⁻⁶ Torr to 10⁻⁹ Torr.
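To get a feel for what these vacuum levels mean physically, consider the kinetic-theory mean free path of a gas molecule, λ = kT/(√2·π·d²·P). The short sketch below is my own illustration (the function name and the nitrogen-like molecular diameter of about 0.37 nm are assumptions, not from the text):

```python
import math

def mean_free_path(p_torr, temp_k=293.0, diameter_m=3.7e-10):
    """Kinetic-theory mean free path of a gas molecule, in meters.

    Assumes an ideal gas of nitrogen-like molecules; p_torr is the
    pressure in Torr (1 Torr = 133.322 Pa).
    """
    k_b = 1.380649e-23          # Boltzmann constant, J/K
    p_pa = p_torr * 133.322     # convert Torr to pascals
    return k_b * temp_k / (math.sqrt(2) * math.pi * diameter_m**2 * p_pa)

# At atmospheric pressure a molecule travels only tens of nanometers
# between collisions; at 10^-6 Torr it travels on the order of 50 m --
# far longer than the electron's path down the column, so collisions
# between beam electrons and residual gas become rare.
print(mean_free_path(760))     # atmospheric pressure, ~7e-8 m
print(mean_free_path(1e-6))    # high vacuum, ~50 m
```

This is why a vacuum of 10⁻⁶ Torr or better is adequate for most TEM work: the residual gas is effectively invisible to the beam.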


Autoradiography:

One method used to localize biomolecules in the TEM is autoradiography. Very briefly, this process uses a radioactively tagged molecule in a specimen and a sensitive film or emulsion overlying the cut sections to reveal the presence and distribution of the tagged molecules.

Radioactivity is the natural process by which radioisotopes (unstable isotopes of an element) become more stable isotopes by emitting particles and energy; this is a form of ionizing radiation. If an electron is emitted, the process is referred to as beta emission. If a particle of two protons and two neutrons (a helium nucleus) is given off, it is defined as alpha emission. Some beta emitters that are commonly used in the study of biological systems are tritium (³H), ¹⁴C, ³²P, ³⁵S, and ¹³¹I. [the most common isotopes of these elements are ¹H, ¹²C, ³¹P, ³²S, and ¹²⁷I] These isotopes have differing specific activities (number of curies per millimole) and half-lives (the amount of time for half of a sample to decay). The Savannah River plant is the nation's only source of tritium, which has a half-life of about 12 years.
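The half-life arithmetic is simple exponential halving: after an elapsed time t, the fraction of a sample remaining is (1/2)^(t/t½). A minimal sketch (the function name is my own; the 12-year tritium half-life is the figure given above):

```python
def fraction_remaining(elapsed_years, half_life_years):
    """Fraction of a radioisotope sample left after elapsed_years,
    given its half-life: (1/2) ** (t / t_half)."""
    return 0.5 ** (elapsed_years / half_life_years)

# Tritium, half-life ~12 years:
print(fraction_remaining(12, 12))   # 0.5   -- one half-life
print(fraction_remaining(24, 12))   # 0.25  -- two half-lives
print(fraction_remaining(50, 12))   # ~0.06 -- mostly decayed
```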

[table 11-1]

Special emulsions have been developed in which the energy of an emitted electron imparts a negative charge to silver sulfide specks within the silver bromide crystals. This allows the silver bromide crystal to be reduced to metallic silver during the development process and therefore visualized as a grain of black silver in an otherwise clear emulsion. The fixation step removes all of the undeveloped silver bromide from the emulsion so that it does not become reduced on its own and darken over time, as happens with an incompletely fixed photographic emulsion.

In autoradiography the biomolecule of interest is radioactively labeled. Often this involves using tritiated compounds but can also involve carbon-14 or phosphorus-32. The labeled molecule is allowed to incorporate into the cell, usually before the specimen is fixed. Unlike many other cytochemical techniques, autoradiography does not require special fixation, dehydration, or embedding procedures, and thus structural preservation can often be better than with some other localization techniques. Following conventional fixation and sectioning, the grids are covered with a silver halide emulsion similar to the ones used in photographic films and papers. Because these emulsions can be sensitized by light, this must be done in a darkroom under safelight conditions. The emulsion remains in contact with the specimen for a period of hours, during which time the beta emission of the tagged molecule reacts with, or "exposes," the silver grains in the nearby vicinity. The grids with sections and emulsion are then processed: the exposed silver grains are developed while the unexposed grains are removed during the fixing step. This results in small thread-like silver grains lying over the tagged molecule. Finally the emulsion gel is removed, leaving only the section and the exposed silver grains. These sections can then be post-stained with uranium and lead and examined in the TEM.

[Fig. 96 Witz]

There are a number of problems associated with autoradiography. Among these are the facts that one must work with radioactively labeled materials and learn to coat grids in near total darkness. A second problem involves the spatial resolution of the label: the exposed silver grain appears as a thread and not as a discrete spot. Also, because the beta emission does not always follow a straight path, silver grains in the general vicinity may be exposed along with those closest to the tagged molecule. One major advantage, however, is that single atoms can be labeled: anyone interested in the fate of, say, a carbon atom in a plant cell as it goes from labeled CO2 to incorporation into higher compounds can follow it with this technique.

Electron Energy Loss Spectroscopy (EELS) and Electron Spectroscopic Imaging (ESI) [text 344-347]

When an electron of the primary beam interacts with the atoms of a specimen, one of several things can happen. First, it may pass by without altering either its energy (wavelength) or trajectory. These non-scattered electrons are primarily responsible for creating the bright portion of a TEM image as they strike either the phosphor screen or the emulsion of the film. Second, an electron may pass near the nucleus of an atom and be attracted by the positive charge. This results in a change of trajectory (scattering) but not in any loss of energy (no change in wavelength). Such elastically scattered electrons may contribute to the final image if the change in trajectory is not so severe that they are eliminated by the aperture of the objective lens. Third, a primary beam electron may interact with one of the electrons of the atom and lose energy to it (an inelastic collision). This results not only in a scattering of the electron but also in a change in its wavelength. When such an inelastic scattering event takes place with one of the inner orbital electrons (K, L, or M shell), the energy lost by the primary beam electron is very specific and, like a characteristic X-ray, contains information about the element that produced it. In a conventional TEM the scattered electrons (from both elastic and inelastic collisions) serve to degrade the final image, since when they strike the recording surface they cannot be distinguished from non-scattered electrons. A small-diameter aperture eliminates many of these and increases image contrast, but degrades resolution by reducing the angle of illumination.

[diagram here]

Electron Energy Loss Spectroscopy (EELS)

We can take advantage of those inelastic collisions that take place with inner orbital electrons in one of two ways. The first involves using a type of magnetic prism to separate those electrons that still travel at their original energy E₀ from those that have been inelastically scattered (Eᵢ = E₀ − ΔE). If one scans the specimen in a point-by-point fashion (STEM), the electrons transmitted at each point can be focused onto a magnetic prism, which in turn focuses the electrons to different focal points depending on their energy. If an aperture or slit is placed in this focal plane and positioned over the point at which the electrons with energy E₀ are focused, and these are detected using a scintillator and photomultiplier tube (PMT) similar to those used in an SEM, the relative quantity of electrons can be converted into a bright or dark pixel on a CRT. By moving the aperture so that it coincides with those electrons that have been slowed by a specific amount, one can create an image showing where the elements are localized that slow the primary electrons by that amount. EELS detectors can be fitted to most commercial STEMs and are placed beneath the film recording camera.
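The link between an energy loss ΔE and a wavelength change can be made concrete with the relativistically corrected de Broglie relation, λ = h / √(2m₀eV(1 + eV/2m₀c²)). The sketch below is my own illustration (the function name and the use of the ~284 eV carbon K-shell loss as an example are assumptions, not from the text):

```python
import math

def electron_wavelength(energy_ev):
    """Relativistic de Broglie wavelength (in meters) of an electron
    accelerated through energy_ev electron-volts."""
    h = 6.62607015e-34      # Planck constant, J*s
    m0 = 9.1093837e-31      # electron rest mass, kg
    e = 1.602176634e-19     # elementary charge, C
    c = 2.99792458e8        # speed of light, m/s
    ev_j = e * energy_ev    # kinetic energy in joules
    return h / math.sqrt(2 * m0 * ev_j * (1 + ev_j / (2 * m0 * c**2)))

# A 100 keV beam electron has a wavelength of about 3.7 pm.  Losing
# ~284 eV to a carbon K-shell electron leaves it slightly slower
# (a slightly longer wavelength) -- the small energy difference that
# the magnetic prism disperses and the slit selects.
lam_0 = electron_wavelength(100_000)        # unscattered, E0
lam_i = electron_wavelength(100_000 - 284)  # after carbon K-shell loss
print(lam_0, lam_i)
```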

Electron Spectroscopic Imaging (ESI)

Like EELS, ESI takes advantage of the fact that electrons can be slowed by very specific amounts depending on which elements (and which of their electrons) they interact with. One company, Zeiss, has taken advantage of this and incorporated a magnetic prism into the column of the TEM. By doing this they can separate the polychromatic (many-wavelength) beam after it has passed through the specimen. The main advantage is that by placing a discriminating aperture or slit above the second projector lens of the TEM they can create a typical TEM image that can be recorded on film. With an ESI system it is not necessary to scan the image, and thus images that contain information about the elemental composition can be created with higher resolution and in less time than either X-ray maps or EELS. The discriminating aperture can also create higher-contrast images by eliminating inelastically scattered electrons without decreasing the angle of illumination, and therefore without degrading resolution. We can also increase the accelerating voltage by exactly the energy loss characteristic of the element we are seeking and thus create a photographic image of that element's distribution in the specimen. In both EELS and ESI it is essential that the specimen be extremely thin, for a primary electron that interacts with more than one atom will no longer carry specific information about any single interaction.