To visualize a specimen in the TEM it is necessary to create what is known as image contrast: there must be regions of differing electron opacity so that those differences can be detected and information about the structure of the specimen discerned. This is accomplished by the differential scattering, deflecting, and stopping of illuminating electrons. For an object to have a degree of electron opacity there must be enough nuclear mass present to accomplish this deflection of electrons. A thick biological specimen meets this requirement by presenting a large number of relatively low atomic weight atoms along the path of the incident electron beam.
A second way of accomplishing this is to present a relatively thin layer of high atomic weight elements in the path of the incident beam.
This same principle is utilized in the positive staining of biological specimens to selectively impart electron contrast to different portions of the specimen. The primary distinction between positive and negative staining is that in positive staining the stain forms a complex with the specimen, whereas in negative staining the stain and the specimen do not react with one another. Also, as the name implies, a positive stain imparts increased electron opacity to the specimen, creating a darker specimen, whereas in negative staining the specimen remains more electron translucent relative to the surrounding stain. Because the stain pools around the edges and crevices of the specimen rather than over its upper surfaces, an image of differential contrast is produced.
Negative Staining Cont'd
One of the limitations of negative staining is that only information about the microtopography of the specimen is produced. Little or nothing is learned about the internal structure. Some of the main advantages of negative staining over conventional preparation of biological specimens are:
1) Improved resolution - A resolution of 10 Å or better is often possible under the right conditions with negative staining. This is a vast improvement over conventionally sectioned material, in which spatial resolution is also limited by the z-dimension (section thickness).
2) Speed - The process of negative staining is exceptionally fast, and it is possible to go from a living organism to observing it in the microscope in a matter of minutes. This is in contrast with conventional preparation, which can take days to complete.
3) Unique Information - Not only can detailed information be learned about the topography, and therefore the three dimensional nature, of the specimen, but samples can be examined that could not be visualized in conventional preparations. These include selectively isolated components from cell fractionation (nuclear, RER, SER, chloroplast, mitochondria, etc.) as well as specially prepared or isolated biomolecules (ribosomes, DNA, specific enzymes, glycoproteins, etc.). Also included are specimens that do not lend themselves to conventional preparation (isolated viruses, bacteria, non-biological specimens, etc.).
4) Simplicity - Little in the way of special equipment or reagents is needed.
There are however a number of disadvantages to negative staining.
1) Repeatability - The technique is straightforward but can often yield greatly varying results, both between samples and even on the same grid.
2) Limited to surface topography of small structures - There are many applications where negative staining has no value.
3) Toxicity - While no more dangerous than conventional specimen preparation, the heavy metal stains used in negative staining are highly toxic.
The choice of negative stain is usually based on four criteria:
A) Stain should be of high density to provide high contrast. This usually involves the use of some heavy metal with great electron stopping or scattering capabilities.
B) It should have a high solubility (80g/100 ml) so that it does not come out of solution during the final stages of drying. It should also have minimal reactivity with the specimen at the concentrations used.
C) Should have a high melting and boiling point so that it does not volatilize under the beam. Since it will absorb much of the beam and its incident energy it must be beam stable.
D) The precipitate formed must be of extremely fine grain so that it appears amorphous down to the limit of resolution. It is difficult to visualize a ping-pong ball surrounded by dark bowling balls.
Some of the best materials for negative staining are phosphotungstate (and other phosphotungstic acid salts), sodium tungstate, uranyl acetate, and uranyl nitrate.
A generalized procedure for negative staining is as follows:
A) An electron transparent support film is produced on which to deposit the specimen. Often this is a Formvar or collodion film. The support film is often coated with a thin layer of carbon, which adds rigidity and strength to the film but can also produce a hydrophobic surface that will inhibit the even spreading of specimen and stains. Others prefer a pure carbon film because of its finer grain size and overcome the hydrophobicity problem by using a glow discharge unit or other means of reducing the hydrophobic effect.
B) A thin suspension of the specimen is placed on the film covered grid and all but a tiny excess is removed with a small piece of filter paper. The remainder is allowed to dry completely onto the film. Care must be taken that the concentration of specimen not be too great or too low. If excess salts are deposited during the drying process they must often be removed by post-wetting the grid after the specimen has attached to the film or by resuspending the specimen in a salt-free medium immediately prior to deposition. Often a spreading agent such as Photo-Flo or 0.4% sucrose is added to the specimen, the stain, or both to ensure a more even distribution of material.
C) After complete drying of the specimen a thin layer of negative stain is similarly applied, drawn off, and allowed to dry. Just the right amount of stain must be allowed to dry to reveal but not obscure the structure of interest. The pH of the stain, as well as the length of time it is applied before drying, will determine to what extent positive staining is eliminated, although positive staining always contributes a small amount to the final image. It often takes making many grids, and examining many grid squares on each, to find the region where all of these ideals coincide.
D) Once the stain is completely dry the grid may be examined in the TEM. Samples are generally stable and can be stored desiccated for many months or years.
Replicas and Shadowing
A second method for examining the surface topography and structures of specimens in a TEM employs shadowing techniques. In this case the image contrast is produced by the uneven distribution of fine metal particles. Once again electron dense metals are the coatings of choice; platinum, chromium, palladium, uranium, and gold are some of the more commonly used metals for shadowing. As the name implies, information about the surface topography is gained by creating a shadow effect that is directly proportional to the microarchitecture of the specimen. This is accomplished by depositing the coating metal from a low angle (5 - 30 degrees) relative to the general plane of the specimen. The greater the height of a portion of the specimen, the larger the resultant shadow. Note that the contrast of such a shadow is the reverse of one produced by sunlight: the shadowed region receives no metal and so is the most electron transparent part of the image.
In interpreting a shadowed preparation it is important to know the direction from which metal was deposited. In fact if the angle and direction of the shadowing source are known relative to the specimen the height of the specimen can be calculated using the equation:
H = l × tan θ        where   H = height of specimen
                             θ = angle of shadowing
                             l = length of shadow

or  H = (b/c) × l    where   b = height of source above the specimen plane
                             c = horizontal distance from sample to source
[fig L-2 Wischnitzer]
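The height calculation can be sketched in Python; the numeric values in the example are illustrative, not taken from the text:

```python
import math

def specimen_height_from_angle(shadow_length, shadow_angle_deg):
    # H = l * tan(theta): height from shadow length and shadowing angle
    return shadow_length * math.tan(math.radians(shadow_angle_deg))

def specimen_height_from_geometry(shadow_length, source_height, source_distance):
    # H = (b/c) * l: equivalent form, since tan(theta) = b / c
    return (source_height / source_distance) * shadow_length

# Hypothetical case: a particle shadowed at 15 degrees casting a 100 nm
# shadow is about 26.8 nm tall.
height = specimen_height_from_angle(100.0, 15.0)
```

Either form gives the same answer; the second is convenient when the evaporator geometry (source height and distance) is easier to measure than the angle itself.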
Shadowing may be done from a fixed angle (static shadowing) or on a rotating specimen (rotary shadowing). Rotary shadowing allows one to resolve portions of the specimen that might otherwise have been obscured by the shadow.
As with negative staining resolution in the TEM of shadowed specimens is dependent on the grain size of the deposited metal. Basically there are three methods of depositing thin metal films for shadowing preparations these being a) heated electrodes, b) electron beam gun (often called an e-gun or electron gun), and c) cathodic etching.
A) Heated Electrodes - With heated electrode evaporation the material to be deposited is heated by passing a large electrical current through it while maintaining it under high vacuum (10^-6 to 10^-7 Torr). The material then begins to volatilize (boil) and is evaporated in all directions into the vacuum chamber. Some of the metal particles strike the specimen and create a shadow that depends on the topography of the sample and the angle of the incoming particles. The most common device for accomplishing this is the vacuum evaporator, and this remains the most common means of depositing metal or carbon. The higher the vacuum at the time of evaporation, the finer the grain size. For this reason liquid nitrogen is often added to the system immediately before shadowing to act as a cryogenic pump.
B) Electron Beam Evaporation - This technique is similar to the heated electrodes method, only in this case electrons emitted from a surrounding tungsten filament (which emits electrons due to thermionic emission) strike the target and cause it to heat. The fine particles are then emitted from the source and are free to strike the specimen. Once again this type of deposition takes place under high vacuum in a vacuum chamber. Because electrons are the source of heat in these deposition devices they are often referred to as electron guns or "e-guns," but they should not be confused with the electron gun assembly that is the source of imaging electrons in a TEM.
C) Cathode Etching (Sputtering) - In cathodic etching, ionized atoms of an inert gas (usually high purity argon) are focused and accelerated to bombard a cathode target. The target consists of a thin foil of high purity heavy metal (gold or gold/palladium). The gas ions displace atoms of metal from the target, which are then free to travel toward the specimen (sputter) and coat it. Because little or no heat is generated in the process, cathode etching is also known as a "cold" source technique. Unless special equipment is used, the deposited metal grains in sputtering are often quite large and, although they may be suitable for SEM, are not suitable for high resolution TEM imaging.
Shadowing is used on many of the same types of samples, and for many of the same reasons, as negative staining. As with negative staining, only information about the surface of the specimen is really obtained. One often goes through the trouble of shadowing (as opposed to just negative staining) because of the added resolution that can be obtained, especially with low angle rotary shadowing. Shadow casts can be made of any stable dried organic or inorganic molecule or organism that will not change shape under high vacuum. The shadow cast can be made on an intermediate substrate such as a piece of mica and then removed, or made directly on a Formvar or carbon film on a grid which is then placed directly in the TEM. It is common to deposit the electron dense metal from a predetermined angle to create the shadow effect and then to evaporate, from directly above, a fine layer of carbon which adds little electron opacity but provides strength to the shadow cast, particularly in regions where no metal was deposited.
A modification of shadow technique is known as replication. In forming a replica many of the same steps employed in creating a shadow cast (metal and carbon deposition under vacuum on an intermediate substrate) are used. The shadow cast is then removed from the substrate by floating on water and the pieces placed in a solution to remove the biological or mineral sample. Strong acids (hydrochloric, chromic, hydrofluoric) or bases (sodium hypochlorite) are used, sometimes in succession, to dissolve away the original biological material and leave only the metal/carbon cast or "replica" of the original specimen. This is often extremely useful in that the original material may have been electron dense enough to prevent visualization of the fine shadow produced on the surface of the specimen. It is also important in making a replica that there be sufficient carbon deposited to make the replica strong enough so that it will hold up in the TEM. The tiny floating replica fragments are rinsed in water and picked up on naked 300 mesh grids and examined in the TEM. Thus there is no support film present as there is in shadow casts.
A modification of the replica technique is when a replica is made of a frozen sample. This is known as freeze etching or freeze fracture. We will discuss this technique when we cover cryobiology.
In some cases the sample may not lend itself to direct replication, and in this case a two step replica (negative replica, reverse replica) may be made. This is done by first making a plastic replica of the specimen by applying liquid plastic to the original specimen. After the plastic hardens the specimen is removed from it either by peeling or by dissolving. The first stage plastic replica is then subjected to metal and carbon deposition as before, and the plastic is removed from the second stage replica by dissolving it in an organic solvent. The metal/carbon replica is then examined in the TEM. Cases in which one might make a two stage replica include rare or large specimens that cannot be sacrificed or specimens that must be used for a second purpose.
In terms of resolution shadow casting, especially low angle rotary shadowing, can equal or exceed the resolution capable from negative staining. Replication is really the only technique available for examining the surface features of an electron dense specimen in the TEM.
One alternative to standard chemical fixation is the use of low-temperature methods otherwise known as cryopreservation. In cryopreservation samples are rapidly frozen and then further processed using a variety of techniques.
Essentially the same goals as standard fixation apply here, namely to arrest cellular processes rapidly and preserve the cell in as near to the living state as possible. We have considerable confidence that this is the case with cryopreservation, as it has been shown that rapidly frozen cells can remain viable following warming. Cryopreservation offers a number of advantages over conventional fixation; among these are:
1) Rapid arrest of cellular processes. One is not dependent on the speed of penetration of the fixative. (milliseconds vs. seconds)
2) Avoidance of artifacts induced by changes in osmolarity, pH, or chemical imbalance.
3) Because cellular constituents are not subjected to biochemical alterations they remain in more of their natural configuration. Labile components are retained and antigenicity is usually improved.
4) Cells can be examined without introduction of other possible artifacts caused by dehydration or embedding.
5) One can examine cellular domains that might otherwise be inaccessible (e.g. IMPs) or from a view that is usually not possible (e.g. 3-D view via deep etch).
There are however a number of disadvantages as well and among these are:
1) The need for specialized freezing and processing equipment (-80 °C freezer, cryoultramicrotome, freeze fracture device, etc.)
2) Freeze damage due to poor freezing rates.
3) Limited view of specimen and or difficulty in manipulating the frozen material.
The major obstacle to good cryopreservation is the introduction of artifacts due to the formation of ice crystals that disrupt cellular structure. The goal of rapid freezing is to prevent the formation of ice crystals and preserve the aqueous component of the cell in a state as near to vitreous as possible. Vitreous means glassy, and just as glass can be regarded as a supercooled liquid rather than a true solid, water can also exist in this quasi-solid state. In general this is very difficult to accomplish with biological samples, and usually we simply strive to keep ice crystal formation to a minimum, often defined as whether or not the crystals are visible in the electron microscope. This cannot be accomplished by simply putting the sample in the freezer.
Perhaps the most important aspect of rapid freezing is the choice of cryogen or freezing medium. A good cryogen should have several properties.
1) Low freezing point - need to have a good thermal gradient between the sample and the cryogen.
2) High boiling point - must minimize the formation of a vapor barrier near the specimen caused by heat flowing out of the sample. The formation of an insulating vapor layer around the sample is known as the Leidenfrost phenomenon (named for J. G. Leidenfrost) and prevents the cryogen from making direct contact with the surface of the sample. This slows the freezing rate and produces ice crystals.
3) It should have a high heat capacity and thermal conductivity. In plain terms, it should be able to absorb heat without a large rise in its own temperature. Because of this, low-boiling liquids such as N2 and He tend not to be very good cryogens.
Cryogen        melting pt. (°C)    boiling pt. (°C)
Freon 22           -160                -40.8
Freon 13           -181                -81.1
Freon 12           -155                -29.8
isopentane         -160                 27.85
propane            -189                -42
nitrogen           -209               -196
ethane             -183                -88.6
helium             -272 (~1 K)        -268.9
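As a rough sanity check on the table, the width of each cryogen's liquid range (boiling point minus melting point) can be computed; a wide liquid range means the cryogen can absorb specimen heat without boiling into an insulating vapor layer. A minimal sketch, using the values from the table above:

```python
# Melting and boiling points in deg C, taken from the table above.
cryogens = {
    "Freon 22":   (-160.0, -40.8),
    "Freon 13":   (-181.0, -81.1),
    "Freon 12":   (-155.0, -29.8),
    "isopentane": (-160.0, 27.85),
    "propane":    (-189.0, -42.0),
    "nitrogen":   (-209.0, -196.0),
    "ethane":     (-183.0, -88.6),
    "helium":     (-272.0, -268.9),
}

def liquid_range(name):
    # bp - mp: how far the liquid can warm before it boils
    mp, bp = cryogens[name]
    return bp - mp

ranked = sorted(cryogens, key=liquid_range, reverse=True)
# nitrogen (13 deg) and helium (~3 deg) fall at the bottom of the ranking,
# consistent with the text's point that low-boiling liquids make poor
# plunge cryogens despite being the coldest.
```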
An alternative to liquid cryogens is the use of a nitrogen slurry or slush. By lowering the pressure over liquid nitrogen it can be induced to freeze and become a solid. When brought back to room pressure the liquid and solid nitrogen exist side by side. Just as a glass of ice and water stays at 0 °C longer than a glass of chilled water alone, the nitrogen slush can absorb more heat from the sample before boiling because of the latent heat of the solid phase. This reduces the Leidenfrost effect and improves freezing rates.
The rate at which a specimen freezes is usually the determining factor in the amount of ice crystal formation and subsequent damage. Slow freezing rates, such as 1 °C/min, result in significant damage. The extracellular water freezes first and draws water out of the cell as the concentration gradient changes. In general cells do not contain large amounts of unbound water, so very large ice crystals usually do not form, but the specimen can become shrunken and distorted.
Rapid freezing is usually defined as a change in temperature in excess of 10,000 °C/sec (vs. 1 °C/min). One of the major problems associated with rapid freezing is the total amount of heat that must be removed from the specimen. If internal heat from the specimen continues to warm the portions that are cooling, it will prevent the water from undergoing a rapid phase change and large ice crystals can form. For this reason the size of the specimen should be kept to a minimum regardless of the freezing method used, and the specimen carrying device should be made of a small amount of material that has excellent thermal conductivity. Thin pieces of copper or gold are usually used.
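The difference between these two rates can be made concrete with a quick calculation, assuming (as a simplification) a constant cooling rate over the whole temperature interval:

```python
def cooling_time_s(start_c, end_c, rate_c_per_s):
    # Time to traverse a temperature drop at a constant cooling rate.
    # (Simplification: real cooling is exponential, not linear.)
    return (start_c - end_c) / rate_c_per_s

SLOW = 1.0 / 60.0    # "slow" freezing: 1 deg C per minute
RAPID = 10_000.0     # "rapid" freezing: 10,000 deg C per second

# Cooling a specimen from +20 C to -150 C (a 170-degree drop):
slow_time = cooling_time_s(20.0, -150.0, SLOW)     # 10,200 s, nearly 3 hours
rapid_time = cooling_time_s(20.0, -150.0, RAPID)   # 0.017 s
```

The six-orders-of-magnitude gap in time is exactly what keeps water from organizing into large crystals during rapid freezing.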
Specimens are then rapidly placed or "plunged" into the cryogen and held there for 20 - 30 seconds. It is important that the specimen be as small as possible as good freezing will only occur on the outer surface and one wants to reduce the heat load placed on the cryogen. Plunge freezing is best used on very small specimens or cell suspensions.
One problem associated with plunge freezing is the fact that as the cryogen removes heat from the specimen it begins to warm up. This is a localized effect but results in either a decrease in the thermal gradient between the cryogen and specimen or, even worse, in the formation of a Leidenfrost layer. To avoid this it is desirable to have a fresh supply of cryogen constantly moving over the sample, carrying away any excess heat. This can be done by either moving the sample rapidly through the cryogen (projectile freezing) or moving the cryogen past a stationary specimen. This is the theory behind jet freezing. The most commonly used cryogen for jet freezing is liquid propane, and the device is known as a propane jet freezer. Basically the unit operates by putting the specimen on a very thin support foil or holder and then placing it between two thin pipes with opposing ports. Liquid propane (liquefied by a bath of liquid nitrogen) is stored in a bomb beneath the output ports and is forced out from the ports under great pressure by introducing dry nitrogen to the propane bomb. Two opposing streams of liquid propane then hit the specimen from both sides and carry away the excess heat. Cooling rates of 30,000 °C/sec have been claimed for propane jet freezing, and heat exchange is 2 - 30 times faster than with plunge freezing alone. These devices are dangerous to use, and we are experimenting now with a device I helped to design which uses six ports (3 above, 3 below) and a stream of liquid nitrogen.
A second alternative to rapid freezing samples with liquids is to bring them into rapid contact with a very cold surface. Although this results in severe ice damage in the portions of the sample not immediately in contact with the surface, it can produce excellent results in the region immediately adjacent to it. Contact freezing is accomplished by pre-cooling a large metal block (usually polished copper, brass, or gold) and then rapidly bringing the sample into contact with the block. Because latent heat and the Leidenfrost effect are not concerns in this method, one simply wants to create the largest thermal gradient possible. For this reason liquid nitrogen, or even better liquid helium, is used. The primary reason that most researchers choose liquid nitrogen is cost: approximately 45 cents per liter versus $200 per liter for liquid helium.
One problem with bringing the sample in contact with the block is the possibility that it will bounce and thus damage the specimen. For this reason a special freeze slamming device is used that has a glycerol hydraulic damping system to drop the specimen onto the block but prevent it from bouncing. A modification of this procedure involves grabbing the specimen between two precooled metal surfaces. These cryopliers are widely used in cryopreservation of specimens such as muscle fibers.
A modification of surface freezing is known as spray freezing. In spray freezing the sample, in the form of a suspension, is sprayed or atomized onto a precooled metal block or into a cryogen. This avoids the problem of bouncing and keeps specimen size to a minimum (1 ul or less in volume). It has the disadvantage that the specimen must be one that can be sprayed, and the sample is often difficult to handle afterwards, as it must be collected without rewarming.
The latest in freezing devices is known as a high pressure freezer. At extreme pressures of 2100 bar (1 bar ≈ 1 atm = 760 mm Hg) the nucleation of ice is significantly reduced. In addition, the melting point of water is lowered to -22 °C (vs. 0 °C at 1 atm); this is one reason that cold water on the ocean bottom does not freeze. At these pressures the critical cooling rate is reduced to about 100 °C/sec (vs. 10,000 °C/sec at 1 atm). The device works by initially pressurizing the chamber with isopropanol followed by liquid nitrogen. Because the cells are pressurized for only a few milliseconds before the LN2 is introduced, they are generally not harmed too much. LN2 can be used because at these pressures it will not boil, so no Leidenfrost layer is formed.
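The pressure and rate figures quoted here are easier to appreciate after a unit conversion (1 atm = 1.01325 bar = 760 mm Hg):

```python
BAR_PER_ATM = 1.01325  # standard conversion factor

pressure_bar = 2100.0
pressure_atm = pressure_bar / BAR_PER_ATM   # ~2073 atm
pressure_mmhg = pressure_atm * 760.0        # ~1.6 million mm Hg

# The required (critical) cooling rate falls from ~10,000 C/s at 1 atm
# to ~100 C/s at 2100 bar -- a hundredfold relaxation.
rate_factor = 10_000.0 / 100.0
```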
Device            freezing depth    cost
Plunge freezer    10 - 20 um        $0.50 - 50
Spray freezer     10 - 20 um        $10 - 50
Slam freezer      20 - 40 um        $2,000
Propane jet       40 um             $10,000
High pressure     50 - 100 um       $150,000
Regardless of the freezing method used, many specimens are treated with a cryoprotectant to reduce the possibility of ice damage. Cryoprotectants function by both increasing the number of ice nuclei and retarding the growth of ice crystals. By either binding to water molecules or substituting for them, cryoprotectants reduce the number of water molecules available for binding to growing ice nuclei and thus greatly slow the growth of these crystals. Cryoprotectants are generally viscous, and in this way they also slow the rate of diffusion of water from the specimen as the exterior water freezes. This helps to reduce the shrinkage effects of slow freezing. Some commonly used cryoprotectants are glycerol (penetrating type) and sucrose (non-penetrating type), generally used in concentrations of 10-30%. One disadvantage of cryoprotectants is that extensive exposure to them has been shown to alter internal structure by exerting osmotic pressure on the cytoplasm. Marine organisms usually have a number of dissolved salts in their medium which act as cryoprotectants, and often these specimens can be frozen without further cryoprotection.
One of the things that can be done with rapidly frozen samples is to replace the aqueous component of the specimen with an organic solvent without allowing the specimen to change from its frozen, arrested state. During the freeze substitution process a rapidly frozen sample is held for one to two days in a vial of organic solvent at -80 °C. Over this time period the frozen water molecules are replaced or "substituted" by molecules of the organic solvent. This happens despite the fact that the water is never allowed to return to the liquid state. Acetone is usually the solvent of choice, although ethanol and methanol have been used as well. The organic solvents have some fixative properties of their own, which can be enhanced by the addition of standard fixatives such as osmium tetroxide. Recently anhydrous glutaraldehyde has become available for use in organic solvents during freeze substitution. Thus the cells are chemically cross linked and fixed before their components have an opportunity to move from their frozen positions. The samples are then gradually brought to room temperature (slowly, to prevent renucleation of ice crystals), the fixative, if any, rinsed out with pure organic solvent, and the samples infiltrated and embedded as usual. Thus in freeze substitution the fixation and dehydration steps are combined into a single step.
One great advantage of rapid freezing and freeze substitution, as opposed to standard chemical fixation, is that many of the artifacts associated with chemical fixation can be eliminated or greatly reduced. A prime example is the study of membranes and membrane bound organelles. The length of time a fixative takes to penetrate a cell, and the changes it induces in terms of permeability, often result in shrinkage or wrinkling of membranes and membrane bound organelles. If one compares chemically prepared cells to freeze substituted material, the smoothness and roundness of the latter is quite striking. Also, rapid cellular processes such as the fusion of membrane bound vesicles can be captured because, although the fusion process itself is very rapid, the freezing rate is even faster.
A second great advantage of freeze substitution is seen when one uses the fixation properties of the organic solvent alone to preserve the cell. This has the great advantage of holding all cellular components in place while at the same time not cross linking the cell so completely that no cytochemistry can be done. In fact, cells preserved in this way have better ultrastructural preservation and a greater ability to react in cytochemical treatments than with any other method. A variety of methacrylate resins have been developed which facilitate immunocytochemical processing of cells, including Lowicryl, which remains a liquid down to -40 °C and can be polymerized at that temperature using U.V. light. Thus cells are freeze substituted, infiltrated, and polymerized without ever regaining the unfrozen state. Cell structures and biochemicals can therefore be preserved in nearly their native state.
Freeze Drying & Distillation:
A modification of the freeze substitution process is known as freeze drying or in some cases as "cryodistillation." In freeze drying the rapidly frozen specimen is held cold under vacuum and its water is allowed to sublimate (go directly from solid to gas). Once all the water has been removed, a low temperature embedding resin (Lowicryl) is introduced, allowed to infiltrate under vacuum, and eventually polymerized and sectioned. Cryodistillation has the advantage that water soluble components are not extracted from their native positions during the substitution process, and thus much can be learned about the natural biochemical composition of the cell.
Yet another technique that can take advantage of rapidly frozen specimens is cryosectioning or "cryoultramicrotomy." In cryosectioning the specimen is sectioned while still in the frozen state, before any post processing (substitution, distillation, etc.) has been done. Frozen sections are thin enough for examination in the TEM, and this can be done either on cold sections, using a cryotransfer system which keeps the sections at liquid nitrogen temperatures, or on warmed specimens that have been allowed to dry down onto a grid. Generally the ultrastructural preservation of cryosectioned material is quite poor. The primary reason for using cryosections is the enhanced antigenic reactivity that one can get from unfixed, unembedded material. The major drawback (other than poor structural preservation) is that cryosections are exceptionally difficult to make, and the technique and equipment needed are tough to master and expensive. Despite this, cryoultramicrotomy can allow one to immunolocalize structures at the TEM level that would otherwise be impossible to localize with conventional methods.
At times it is important to examine a replica of a specimen that has not been dried but rather is in the hydrated state. For these applications one uses the technique of freeze etching or freeze fracture. The key element of freeze fracture is that the platinum/carbon replica is made on a frozen specimen contained within a vacuum evaporator. In those cases where actual fracturing of the specimen is important, a mechanical microtome that can be cooled to liquid nitrogen temperatures and operated within the vacuum evaporator is also employed. As might be expected, these specialized vacuum evaporators, or freeze-fracture devices, are quite expensive, often costing as much as or more than the TEMs for which they prepare specimens.
Freeze fracture operates on the principle that a specimen that is held in place frozen in ice can be treated like a solid rigid structure and broken or fractured in various regions of the specimen. These newly fractured surfaces may run along the original surface of the specimen but are more likely to pass through the internal portion of the specimen. Thus a replica made of these newly exposed surfaces can reveal important information about the internal composition of a specimen, not just the exterior as in normal dry shadow casts or replicas. As with other cryotechniques the size of the ice crystals formed is especially important in freeze fracture and specimens are usually prepared using one of the rapid freezing techniques previously discussed (plunge freezing, jet freezing, slam freezing, high pressure freezing).
To prepare a freeze fracture replica a small amount of the sample is placed on a small metal carrier sometimes referred to as a "hat." These hats are often made of gold because of that metal's ability to conduct heat rapidly away from the specimen. The hats are then rapidly frozen and stored in liquid nitrogen until ready for use. In the meantime the freeze fracture device is started up and brought down to high vacuum using a diffusion/mechanical pump system. The cold specimens on their hats are then rapidly transferred to a stage which has been cooled under vacuum by liquid nitrogen flowing through it. The chamber is then rapidly pumped down again while the stage and specimens remain at LN temperatures. Now the microtome arm assembly with attached razor blade is cooled to -195 °C with LN while the stage and specimens are gradually raised to about -100 °C. The cooled knife is then rotated over the specimen until contact is just made and thin shavings are removed from the top surface. These shavings are not sections, and the specimen is not so much sectioned as it is scraped. Although a razor blade is used, the closest analogy is a huge snow plow clearing a snow covered dirt road: as it makes contact, small pieces and chunks are torn loose, revealing freshly exposed frozen surfaces. After a sample has been scraped and a clean surface exposed, the sample is often "etched" for a period of 1-3 minutes. During this process the cold knife hovers above the fractured specimen while both are held under vacuum. The combined effect of high vacuum and the temperature differential (-150 °C vs. -100 °C) causes some of the frozen surface water of the specimen to sublimate (go directly from solid to gas) and be removed by the vacuum system. As this happens the non-aqueous components of the specimen become more and more prominent relative to the flat background. A variation on this technique involves deep etching followed by rotary shadowing.
Using this technique large relief images can be created of structures that are only visible in the TEM.
A modification of this technique is known as double replica or complementary replica formation. In this process the sample is initially frozen sandwiched between two planchets which are then inserted into a special precooled holder. This holder is then flipped apart while on the cold stage and the specimen is split in two exposing matching surfaces. A replica of each surface is then made and examined. In this way both surfaces can be viewed whereas the opposite surface is scraped away in conventional fracturing.
One of the most useful and widespread applications of freeze fracture is in the study of biological membranes and their various protein components. To understand why we need to look at how a biological membrane is organized. Basically all biological membranes are composed of two layers of phospholipids arranged so that their hydrophobic regions face one another. Embedded in this phospholipid sandwich are intramembranous particles (IMPs) which are proteins or protein complexes that span from one hydrophilic side of the membrane to the other. In addition to these IMPs there may or may not be additional protein complexes that are embedded in one half or the other of the membrane.
When a cooled razor blade contacts a frozen specimen the membrane selectively splits apart at the hydrophobic junction. This occurs because at reduced temperatures the energy needed to split the hydrophobic junction of the membrane is less than that needed to split the ice or aqueous components of the cell. A replica made of a fractured surface typically reveals large portions of the internal region of biological membranes. In fact, freeze fracture is about the only technique available that allows one to visualize the hydrophobic regions of membranes. Of course other structures such as nuclei, flagella, and cell walls are also fractured during this process.
As difficult as it is to make a good freeze fracture replica, it is often even more difficult to interpret one. Part of the reason for this is made clear by the following illustration. Conventional scientific illustration usually places the light source in the upper left hand corner of the image at an angle of about 45 degrees relative to the specimen. Most SEMs follow this convention when designing the scan pattern, detector position, and display monitor. Based on this we conclude that an object is convex when the dark shadow it casts falls in the lower right hand corner of the image and concave when the shadow falls in the upper left hand corner. Because cells are mostly composed of spherical vesicles and curved membranes, the freeze fracture image is a case study in this type of illustration. The first problem one encounters in interpreting freeze fractures is that the lights and darks of the shadows are reversed from those made by light. For this reason some people initially find it easier to interpret their micrographs from the photographic negative rather than the positive image. A second problem is that when a replica is placed into the TEM there is virtually no way to know beforehand the angle of shadow (the direction from which metal was deposited). After cleaning the replica, picking up the tiny fragments on grids, and placing them into the TEM, nearly any orientation is possible. Two things can help to orient the viewer of a freeze fracture replica. The first is any structure that the operator knows to be convex; IMPs are an excellent example. Using the shadow produced by the convex structure, the direction of shadowing can be determined and the micrograph oriented so that convex structures appear convex and concave ones appear concave. A strategically convenient piece of dirt that fell on the surface of the sample immediately before the replica was made can also fill this function.
One problem that arose when freeze fracture began to be widely used by electron microscopists was that of terminology. Before freeze fracture a biological membrane could be thought of as a single sheet with two (hydrophilic) surfaces. Now suddenly scientists had four different surfaces to deal with and a way was needed to clearly distinguish between them. A paper by [?] created the guidelines by which all other freeze fracture images would be labeled. The first rule suggested was that the membrane be broken down into surface (hydrophilic) and fracture (hydrophobic) profiles, abbreviated with the "S" and "F" designations. The other way of distinguishing which surface is being discussed is to determine whether the half of the membrane in question was in contact with the protoplasmic (P) portion of the cell or the endoplasmic (E) portion. Thus any given biological membrane can be spoken of in terms of four surfaces or "faces"; going from the outside of the cell towards the cytoplasm, the plasma membrane would be designated as having an ES face, an EF face, a PF face, and a PS face. This designation system becomes tricky when one begins talking about double membrane bound systems (nuclear envelope, mitochondrion, chloroplasts) but is nonetheless clear and unambiguous. Double replica formation is especially useful in this case, for both the EF and PF faces of a given membrane can be viewed and the relative abundance of IMPs on each can be determined.
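As a minimal sketch, the combinatorial logic of this face nomenclature can be written out in a few lines of Python. The `face_label` helper is hypothetical, purely for illustration of how the two-letter designations are composed:

```python
def face_label(contact_region: str, profile: str) -> str:
    """Compose a freeze-fracture face designation.

    contact_region: 'P' (protoplasmic) or 'E' (endoplasmic)
    profile:        'S' (hydrophilic surface) or 'F' (hydrophobic fracture face)
    """
    if contact_region not in ("P", "E") or profile not in ("S", "F"):
        raise ValueError("region must be P/E, profile must be S/F")
    return contact_region + profile

# The four faces of the plasma membrane, ordered from outside the cell inward:
faces = [face_label(r, p) for r, p in [("E", "S"), ("E", "F"), ("P", "F"), ("P", "S")]]
print(faces)  # ['ES', 'EF', 'PF', 'PS']
```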
Immunoelectron microscopy as defined here has a broader meaning than strictly antibody-antigen reactions. Under this broad definition it includes the labeling of biochemicals so that their localization can be visualized in the TEM. To visualize this in the TEM we must in some way tag or label the biochemical of interest with an electron dense marker that distinguishes it from other cellular components. Some techniques that come under this category are the lectin-horseradish peroxidase reaction and biotin-avidin conjugates, as well as antibody-antigen reactions.
An immune response is one in which an organism exposed to a foreign substance develops a resistance to that type of substance so that it is resistant or "immune" to infection upon future exposure to a similar substance. Any substance capable of eliciting an immune response is referred to as an antigen.
There are two broad classes of immune responses: 1) humoral antibody responses, which involve the production of antibodies that circulate in the bloodstream and bind specifically to the foreign antigen that induced them, and 2) cell-mediated immune responses, which involve the production of specialized cells that react mainly with foreign antigens on the surface of host cells. In immunoelectron microscopy we are primarily concerned with humoral responses that produce soluble antibodies.
Antibodies are produced by a class of cells known as B lymphocytes. The only known function of B lymphocytes is in fact to make antibodies. Antibodies are a unique group of proteins that can exist in millions of different forms, each with its own unique binding site for antigen. Collectively they are called immunoglobulins (abbrv. Ig). Most antibodies are bivalent, that is, they have two identical antigen binding sites. Each antigen binding site is composed of a heavy and a light chain, each containing about 220 amino acids. They are hinged by way of their heavy chains to an Fc region (Fc stands for Fragment, crystallizable).
[Figure 17-17 here]
There are five different classes of antibodies: IgA, IgD, IgE, IgG, and IgM. They differ from one another in the composition of their heavy chains. IgG antibodies constitute the major class of immunoglobulin in the blood and are copiously produced during secondary immune responses. It should be remembered that when using monoclonal antibodies (which recognize a single antigenic site, vs. polyclonal antibodies, which recognize multiple antigenic sites on the antigen) the right portion of the antigen must be presented at the surface of the section in order for the antibody to recognize it and bind to it.
Immunogold labeling can be done in one of several ways. The colloidal gold particles (5- 40 nm) are conjugated either directly to the antibody being used or to an IgG or IgA protein. In an indirect method the sections or tissue is first incubated in the antibody of interest. Next the sample is exposed to a secondary antibody that reacts to the IgG or IgA antibody of the first animal. This secondary antibody is conjugated to a colloidal gold particle which because of its electron density allows one to visualize where in the cell the primary antibody (and by implication the antigen) is localized.
One can even do double labeling experiments if gold particles of two different sizes and IgGs from different animals are used. This requires using sections picked up on uncoated grids. A number of other electron dense tags can be used with antibody labeling as well. Ferritin molecules (the storage protein for iron in mammals) have a diameter of about 10 nm, and their iron component imparts their electron opacity. Horseradish peroxidase (HRP) is an enzyme that can be coupled to the primary antibody and then allowed to form an electron dense reaction product that is visualized. One alternative to using a secondary antibody involves the use of protein A. Protein A is produced by the bacterium Staphylococcus and can bind to the Fc portion of IgG. Tagged protein A is often better suited for use as a secondary label than is an anti-IgG antibody.
There are a number of rules that one must follow in performing immunoelectron microscopy. The first involves the choice of grids. Some of the solutions that the sections will be exposed to may react with the metal of the grid (e.g. copper reacts with high salt concentration solutions). To avoid unwanted chemical reactions one typically chooses grids made of non-reactive metals. Nickel is a common choice as it is fairly unreactive and relatively cheap. Others prefer solid gold grids as these are the most inert. Coated or uncoated grids may be used, but sections should not be carbon coated after they are picked up as this can make the sections hydrophobic.
A second rule that should be followed is to avoid overfixation. This is often a difficult thing to balance, as we want to retain as much structural preservation as possible while at the same time retaining the biological activity of molecules, and these goals work against one another. Excessive crosslinking with glutaraldehyde can prevent the reactive sites of a molecule from retaining their shape and therefore function, and fixation with osmium can render membranes impermeable and make membrane bound biomolecules inaccessible. Sometimes osmium can be used as a fixative after the antibody labeling has been carried out, but this can only be done in cases where the specimen is labeled prior to embedding. A typical fixative for immunocytochemistry studies would be a mixture of 4.0% paraformaldehyde and 0.1% glutaraldehyde in the proper buffer. This will provide reasonable ultrastructural preservation while preventing excessive crosslinking. Often sections on grids are initially soaked on a drop of saturated sodium metaperiodate. This reacts with any unbound or unreacted glutaraldehyde in the sections and prevents the glutaraldehyde from crosslinking the antibodies when they are applied to the sections. Freeze substituted specimens must of course be rehydrated if pre-embedding labeling is to be done; otherwise this method is an excellent fixation choice (assuming that fixatives have been left out of the substitution fluid). Unfixed material, such as that produced by cryosectioning, offers the best cross reactivity, but structural preservation and image contrast are often very poor. Cryosections offer the advantage of never having been fixed, of retaining water soluble components, and of having no embedding medium to penetrate. Sometimes sections are "etched" to make the antigens contained within them more accessible. A unique application of this involves polystyrene embedding and acetone etching. Prolonged exposure can remove all of the embedding resin after sectioning, leaving only the specimen.
This is similar to xylene extraction of paraffin sections.
Another type of immunolabeling involves the use of avidin. Avidins are a class of basic glycoproteins that have a MW of about 65,000 and can be found in large amounts in egg white or in Streptomyces. They are useful in immuno EM because of their high affinity binding for biotin: each avidin molecule has four biotin-binding sites. Many biomolecules can be labeled with biotin (biotinylated), including proteins, lectins, fluorescent beads, and nucleic acid bases. When one treats a sample with gold- or ferritin-conjugated avidin it selectively binds to the biotinylated molecule, and the metal atoms act as an electron dense marker of where the biomolecule of interest is localized.
Enzyme Cytochemistry: [text 254-261]
In addition to antibody/antigen type reactions there are other biochemical reactions that can be utilized to visualize the localization of biological compounds in the TEM. One of these is the very specific reaction that can take place between certain enzymes and their substrates. These reactions can be utilized to localize the presence of a given enzyme in a specimen. The technique works by trapping the reaction product formed between the enzyme and the substrate and visualizing it.
As with immunoEM the initial fixation of the specimen must be sufficient to preserve structure while at the same time not degrading the enzyme's ability to react with substrate. A fixation similar to the ones used in immunoEM is often employed. Because enzymatic reactions are sensitive to environmental conditions such as pH, temperature, and substrate concentration, all of these need to be taken into account. Finally, unlike gold particles the reaction product may be only weakly electron opaque; therefore at least some of the sections are usually viewed prior to post staining. One interesting note is that enzymatic labeling is often best accomplished using epoxide resins rather than methacrylates. It is believed that the hydrophilic nature of methacrylates allows the enzyme to easily access the substrate, carry out the reaction, and then detach. Since we want the enzyme to remain attached to the substrate (thus showing the localization of the substrate) it is actually better to use resins that are more difficult to penetrate and therefore more difficult for the enzyme to release from.
Horseradish peroxidase (HRP) is an enzyme that reacts with peroxide; through the addition of DAB and oxidation with OsO4 it forms an insoluble electron dense precipitate. Sometimes HRP is coupled to an antibody and a reaction product is then formed through the addition of the proper components to produce an insoluble precipitate. The earliest use of this involved a ferritin-HRP complex, but this may have reduced access to the lectin binding sites due to steric hindrance. More recently HRP has been electrostatically bound to colloidal gold and thus used as an indirect marker for lectin binding sites. This avoids the steric hindrance problem and gives a better indication of lectin binding site distribution.
Alternative Methods [280-285]
Lectins are plant compounds that have specific affinities for certain carbohydrates. They may be tagged and used as a probe for the presence of these oligosaccharides.
Naturally occurring compounds:
Molecules that normally bind or react with one another can also be utilized as labels.
One final type of biochemical localization involves the use of diaminobenzidine (DAB). DAB specifically binds to sulfated mucopolysaccharides when exposed to them at low pH. The DAB can subsequently be oxidized by exposure to osmium tetroxide. The resulting electron dense precipitate is then an indication of where the sulfated polysaccharides are localized. A second use of DAB takes advantage of the fact that DAB can be oxidized by UV irradiation. If a sample is first made fluorescent by labeling with a fluorescent dye or conjugated molecule, then bathed in DAB, and finally exposed to the wavelength of light that will excite the fluorochrome, the energy absorbed will oxidize the DAB, which in turn will form an insoluble, electron dense precipitate. This precipitate will therefore be colocalized with the fluorescent marker. This reaction also takes place with autofluorescent compounds that are naturally found in cells, making the cytochromes of mitochondria and the chlorophylls of chloroplasts sites where DAB precipitation will take place. The technique has the advantage of allowing fluorescent and EM studies to be done on the same sample and is an excellent way of positively identifying the biological structure that was originally labeled.
Sections have thickness to them and are not really flat. Things generally bind only to exposed molecules. Size of probe and porosity of the embedding medium are two factors that influence immunolabeling. For this reason hydrophilic methacrylate resins such as LR White and Lowicryl are often used in immunoelectron and cytochemical microscopic studies and epoxy resins generally avoided. This is not to say that epoxy resins cannot be used, only that if labeling is poor the choice of resin should be re-evaluated. Labeled sections are usually post stained after immunolabeling with uranyl acetate or lead citrate to provide contrast to the sample.
Stereology [text 288-303]
We have seen how the TEM can be used for descriptive work and to learn about the spatial distribution of biomolecules. A third type of information available to us using the TEM is quantitative information about the specimen and geometric or three dimensional information. Obtaining geometric information is known as stereology, and image quantification is known as morphometry. The techniques used in both of these applications are based on the assumption that, relative to the total size and volume of the original specimen, any given section can be considered a two dimensional view. Some of the parameters that can be measured using stereological approaches include area, volume, surface area, length, and the number of structures present.
One basic approach to stereology involves the use of grid patterns or "test systems" that are overlaid on the micrograph. By carefully recording the number of interactions between structures in the micrograph and points on the test system one can come up with an objective value rather than a subjective "guess" for the number of interactions. These values can then be plugged into various equations to derive values for the parameter of interest.
For example, if the percent area of a given object relative to the surrounding structure is desired, one can count the number of interactions or "hits" scored when the structure in question coincides with the intersections of the grid squares or with the lines themselves. If one then takes the ratio of the number of intersections falling on the structure (Ns) to the total number of test points (Nt), Ns/Nt, and multiplies by 100, one obtains a value for the percent area of the structure. The fineness of the grid system will largely determine the accuracy of this estimate as well as the time required to do the counting.
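The arithmetic of this point-counting estimate can be sketched in a few lines of Python; the hit counts below are hypothetical stand-ins for values read off a real test grid:

```python
def percent_area(hits_on_structure: int, total_points: int) -> float:
    """Point-counting estimate: percent area = 100 * Ns / Nt."""
    if total_points <= 0:
        raise ValueError("total_points must be positive")
    return 100.0 * hits_on_structure / total_points

# Example: 37 of 200 grid intersections fall on the organelle of interest.
print(percent_area(37, 200))  # 18.5
```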
If one takes repeated area measurements from adjacent serial sections a volume can be calculated for the object. Two values are needed to do this. The first is of course an accurate area measurement for each section. We have already seen how these values can be approximated. The other is section thickness. Since we are only viewing a two dimensional view of a three dimensional object (the section DOES have a height or "z" component to it) we need to know this thickness to plug into the equation:
V = (sum of section areas) x average section thickness
The reflectance color of a section is a good way of estimating the thickness of a section (much better than relying on the microtome settings) but can vary 10%-20% and also varies with the type of resin used. A more accurate method involves re-embedding the section, cutting it transversely, and then viewing and measuring it in the TEM; this of course destroys the original section. Other methods, such as tilting the section and carefully measuring the relative distance changes for objects in the plane of tilt, can also be used but are somewhat cumbersome. A simple and fairly accurate method takes advantage of sharp folds in the section and the estimate that the section thickness is one half the fold width. The most accurate method involves using plastic beads of a known diameter (determined from high resolution SEM or negative stained TEM) and sectioning these along with the specimen. By counting the total number of sections needed to pass through a bead and dividing the bead's diameter by this number, the average section thickness can be determined.
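A short sketch combining the bead-based thickness estimate with the volume equation above; all of the measurements used here are hypothetical:

```python
def avg_section_thickness(bead_diameter_nm: float, n_sections: int) -> float:
    """Average section thickness = bead diameter / number of sections
    needed to pass completely through the bead."""
    return bead_diameter_nm / n_sections

def volume_from_sections(areas_nm2, thickness_nm: float) -> float:
    """V = (sum of per-section areas) x average section thickness."""
    return sum(areas_nm2) * thickness_nm

# A 500 nm bead sectioned completely in 5 sections -> 100 nm per section:
t = avg_section_thickness(500.0, 5)
# Areas of the object measured on four adjacent serial sections (nm^2):
areas = [1.2e4, 1.5e4, 1.4e4, 0.9e4]
print(t)                               # 100.0
print(volume_from_sections(areas, t))  # 5000000.0 (nm^3)
```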
To a large extent computer analysis of electron micrographs has replaced many of these classic stereologic techniques. The first generation of these systems took advantage of a digitizing tablet linked to a computer with specialized software. By tracing the image from either negatives or prints the essential information is entered into the computer for the software to use. Variables such as the length of convoluted lines or the areas of irregular objects are easily calculated by the computer. These values can be incorporated into a spreadsheet program and quantified. Such hand entered data sets can also be used in three dimensional reconstruction applications to visualize structures that span several planes and to calculate volume measurements.
More sophisticated software can now digitize the images without the use of hand tracings. Images are either digitized by placing the micrograph under a video camera or directly through the use of a CCD mounted beneath the camera in the TEM column. The operator then highlights the structures of interest (usually by discriminating on grey levels) and then the computer will automatically calculate unit measurements. This can be extremely useful when one is attempting to perform particle counting or other such applications.
X-ray Microanalysis [text 332-344]
Another class of signals produced by the interaction of the primary electron beam with the specimen comes under the category of characteristic X-rays. When an electron from an inner atomic shell is displaced by colliding with a primary electron, it leaves a vacancy in that electron shell. In order to re-establish the proper balance in its orbitals following an ionization event, an electron from an outer shell of the atom may "fall" into the inner shell and fill the spot vacated by the displaced electron. In doing so this falling electron loses energy, and this energy is given off as X-radiation or X-rays.
In addition to characteristic X-rays, other X-rays are produced as a primary electron decelerates in response to the Coulombic field of an atom. This "braking radiation," or Bremsstrahlung, is not specific for the element that causes it, so these X-rays contribute no useful information about the sample and in fact add to the background X-ray signal.
For each electron/specimen interaction there are specific electron replacement events that can take place. We speak of these events as either K, L, or M replacement events depending on which orbital shell lost the electron.
We can further dissect these electron replacement events by speaking of them in terms of which outer orbital electron served as the replacement for the displaced electron. If the replacement electron came from the adjacent orbital shell it is an alpha event, if it came from two shells away it is a beta event, and if the electron was donated from three shells away it is a gamma event. Within a given shell there may be several different orbitals, any of which could donate the replacement electron; thus we can speak of a K alpha1 or a K alpha2 replacement. The important thing to note is that each electron replacement event for each element gives off a specific amount of energy as the replacement electron goes from a higher energy state to a lower energy state. This change in energy is released in the form of x-rays, and because of the specific nature of these x-rays they are called "characteristic x-rays."
By using special detectors that can discriminate between the different characteristic x-rays one can obtain information about the elemental composition of the specimen. Let's assume that a plant cell is found in thin sections to have an electron dense inclusion of unknown composition. By bombarding the inclusion with electrons from the beam we drive off a number of electrons which are replaced by outer orbital electrons and give off characteristic x-rays for the elements in the specimen. Since we are primarily interested in the composition of the inclusion and not the surrounding tissue it is very beneficial to be able to focus the beam to a single spot and position this over the object of interest. This can best be done in a Scanning Transmission Electron Microscope or STEM. A STEM is equipped with a set of scan coils and can function in much the same way as an SEM by rastering the beam (reduced to a small spot) over the specimen, which in this case would be a section on a grid. Also, because a thicker sample contains more of the material in question, we tend to cut thicker sections for x-ray microanalysis than we would for straight visualization; sections of 100-250 nm are typically used. Finally, because certain elements produce their own characteristic x-rays that may interfere with or obscure those of the unknown sample, we tend to avoid osmicating the specimen and avoid UA and lead staining. The choice of metal grid can also be important, as the peaks of grids composed of one metal (e.g. nickel) may not overlap those of the elements of interest whereas those of others (e.g. copper) may.
By collecting the x-ray signals produced over an extended period of time (e.g. 100 seconds) certain electron replacement events will occur more frequently than others. We collect for 100 seconds or longer so that the more frequent events will reinforce each other and thereby become distinct from the background (characteristic x-rays from other elements) and continuum x-rays. These repeated energy spectra manifest themselves in the form of distinct peaks. Next, with the aid of a computer, we can assign a numerical value to the midpoint of each peak and scan through the values from known samples to find the most logical match to our observed spectra. In trying to assign a match it is important to note that for a given element there will be more K alpha events than K beta events, and more K beta events than K gamma events. Thus if one suspects the presence of a given element because a collected peak matches the K beta peak of a known element, there should be a corresponding K alpha peak for that element that is larger than the suspected K beta peak.
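The peak-matching logic described here can be sketched as follows. The reference energies are approximate textbook K-line values, and the `K_LINES` table and example spectrum are illustrative assumptions, not a calibrated line library:

```python
K_LINES = {  # element: (K-alpha, K-beta) emission energies in keV (approximate)
    "Ca": (3.69, 4.01),
    "Fe": (6.40, 7.06),
    "Cu": (8.05, 8.90),
}

def match_k_alpha(peak_kev: float, tol: float = 0.05):
    """Return elements whose K-alpha line lies within tol of the measured peak."""
    return [el for el, (ka, _kb) in K_LINES.items() if abs(ka - peak_kev) <= tol]

def k_beta_is_plausible(peaks: dict, element: str, tol: float = 0.05) -> bool:
    """peaks maps measured energy (keV) -> counts. A K-beta assignment is
    plausible only if a larger K-alpha peak for the same element is present."""
    ka, kb = K_LINES[element]
    ka_counts = max((c for e, c in peaks.items() if abs(e - ka) <= tol), default=0)
    kb_counts = max((c for e, c in peaks.items() if abs(e - kb) <= tol), default=0)
    return kb_counts > 0 and ka_counts > kb_counts

spectrum = {6.41: 900, 7.05: 180}           # hypothetical collected peaks
print(match_k_alpha(6.41))                  # ['Fe']
print(k_beta_is_plausible(spectrum, "Fe"))  # True: K-alpha peak dominates K-beta
```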
There are basically two types of x-ray detectors available for TEM and SEM: Energy Dispersive X-ray (EDX) detectors and Wavelength Dispersive X-ray (WDX, also called WDS) detectors. They function in quite different ways.
EDX detectors are the most versatile, cost effective, and hence most widely used type of x-ray detectors. EDX detectors are composed of a silicon semi-conductor that has been doped with lithium and are therefore referred to as Si(Li) detectors. The EDX detector works by measuring the change in conductivity that occurs when the semi-conductor absorbs excess energy in the form of x-radiation. The conductivity increase is directly proportional to the energy of the x-rays, and so by carefully measuring this increase one can know the level of x-radiation that was absorbed by the detector. Since these changes are still relatively small the detector is kept at liquid nitrogen temperatures to reduce electronic noise that would degrade peak resolution. Since the internal environment of the TEM is subject to minor (introduction of specimen) to major (venting of the column) changes, the detector must be kept in an exceptionally clean environment. To do this it is typically shielded behind a very thin seal or window. Beryllium is the material of choice for such a window. Because of its low atomic weight (4), beryllium will not block the higher energy x-rays produced by elements of higher atomic weight. It will however reduce the ability to detect x-rays of relatively low energy (such as those given off by elements of low atomic weight) and make the detection of "light" elements more problematic. To avoid this some systems have gone over to a windowless detector, which depends on the purity of the TEM environment to keep the cold EDX detector free of contamination.
The second type of x-ray detector is based on WDS. In WDS, crystals of known composition and structure are placed on a movable turret relative to the x-ray source and a simple detector (alternatively, the detector itself is movable relative to the crystal). X-rays entering a crystal are reflected or diffracted based on the particular arrangement of molecules in that crystal. Only those x-rays entering from a specific angle relative to the matrix arrangement of the crystal will be so deflected. The angle at which this takes place is known as the "Bragg angle" and is dependent on the energy of the incoming radiation.
n(lambda) = 2d sin(theta)

where n = an integer (1, 2, 3, etc.)
lambda = the x-ray wavelength
d = the interplanar spacing of the crystal
theta = the angle of incidence
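As a worked example of Bragg's law, the following sketch computes the first-order Bragg angle for an x-ray line on a diffracting crystal. The Fe K-alpha energy and LiF interplanar spacing are approximate illustrative values:

```python
import math

def bragg_angle_deg(wavelength_nm: float, d_spacing_nm: float, order: int = 1) -> float:
    """Solve n*lambda = 2*d*sin(theta) for theta, in degrees."""
    s = order * wavelength_nm / (2.0 * d_spacing_nm)
    if not 0 < s <= 1:
        raise ValueError("no solution: n*lambda/(2d) must lie in (0, 1]")
    return math.degrees(math.asin(s))

# Fe K-alpha: ~6.40 keV; lambda(nm) ~ 1.2398 / E(keV)
lam = 1.2398 / 6.40            # ~0.194 nm
d_lif = 0.2014                 # approximate LiF (200) interplanar spacing, nm
print(round(bragg_angle_deg(lam, d_lif), 1))  # ~28.7 degrees
```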
[Fig 5.2 Goldstein]
The crystal is often polished to a curved surface so that the collected x- rays can be focused onto the actual detector. The detector is a thin wire kept at a high positive voltage in an argon/methane environment. As the x-rays pass through a thin plastic window they ionize the gas mixture and conduct electrons to the wire. This current flow is carefully measured and is proportional to the energy of the x-ray which in turn reveals information about the source of the x- rays.
WDX is more quantitative than EDX but has a number of disadvantages to it. The most important of these is the fact that each type of crystal has a relatively narrow x-ray energy range that it can deflect. Thus each crystal and detector can only detect a small range of elements whereas an EDX detector can detect nearly the entire spectrum of elements. Because of this one often needs a suite of WDX detectors, each with a different type of crystal and responsible for a different portion of the periodic table. This means having a number of open ports available near the specimen. On most TEMs we do not have this luxury and so WDX systems are usually found on a special class of SEM known as a microprobe. X-ray analysis on TEM and STEM is usually accomplished with an EDX system.
Electron Diffraction [text 347-355]
We have seen several methods whereby we can learn more about the specimen than just its appearance. Although not widely used by biologists electron diffraction is a powerful TEM technique that can provide important information about the molecular arrangement of crystalline specimens.
Electrons are forward scattered or "diffracted" as they come in contact with molecules in the specimen. Most of the time this results in a random deflection of the illuminating electrons and creates a fuzzy or muddled quality in the final image. This is because a deflected electron creates just as bright a spot on the fluorescent screen or TEM film as an undeflected electron, and a randomly scattered electron may hit the screen in a region that would normally appear dark due to the presence of an electron dense body immediately above it. To reduce the effect of these randomly scattered electrons one typically places a small diameter aperture in the objective lens immediately beneath the specimen. Although this reduces overall illumination and reduces resolution by decreasing the angle of the cone of incident illumination, it increases image contrast by eliminating most of the forward scattered electrons.
The situation is quite different when the electrons of the beam encounter a crystalline specimen. A crystalline specimen is one in which the molecules of the specimen are arranged in such a way as to form a close-packed lattice array with individual molecules arranged in a very ordered and repetitive structure. If the electrons strike a crystalline structure at the proper angle they will all be diffracted from the individual planes of the lattice in the same angle and same direction and brought to the same focal point. This focal point lies in the same plane as the one in which transmitted electrons come to focus and is known as the back focal plane of the objective lens.
Electron Diffraction Cont'd
The angle at which the incident electrons encounter the specimen is the most critical parameter in creating a sharp electron diffraction pattern. This angle is known as the Bragg Angle. A crystalline specimen that is placed on a grid may initially lie at any random angle relative to the incident beam. To orient the specimen so that the incident beam strikes at the proper Bragg angle and generates a sharp diffraction pattern it is necessary to tilt and rotate the specimen until a clear pattern is formed. If the beam strikes the lattice at the proper Bragg angle, electrons that are scattered from the same point in the specimen are brought together at a single point in the image plane. Likewise electrons scattered from different points in the specimen BUT deflected in the same direction and angle will converge in the back focal plane of the objective lens.
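The Bragg angle mentioned above comes from Bragg's law, n·λ = 2d·sin(θ). A quick sketch shows why the diffracted electrons travel so nearly parallel to the beam; the wavelength and lattice spacing below are illustrative assumptions, not values from the text:

```python
import math

def bragg_angle_deg(wavelength_nm, d_nm, n=1):
    """Bragg's law, n*lambda = 2*d*sin(theta); returns theta in degrees."""
    return math.degrees(math.asin(n * wavelength_nm / (2.0 * d_nm)))

# An electron wavelength of roughly 0.0039 nm (about 100 kV) and an assumed
# lattice spacing of 0.2 nm give a Bragg angle of only about half a degree,
# which is why electron diffraction is such strongly forward scattering.
theta = bragg_angle_deg(0.0039, 0.2)
```

Because the angle is so small, even slight tilting of the specimen moves the lattice planes into or out of the diffracting condition, which is why the tilt-and-rotate search described above is necessary.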
If one can obtain a picture of this pattern and carefully measure the spacings between these spots of convergence much can be learned about the molecular structure and composition of the specimen.
The spacing between lattice planes can be calculated from the diffraction pattern using the following equation:

d = λL / R

Where d = Spacing between planes
λ = Wavelength of electron (based on accelerating voltage)
L = Camera length (distance in mm between specimen and camera)
R = Distance from center spot to bright dots on negative
It is important that these calculations be done on the negative itself. If done on prints the exact enlargement factor must be known so that the R measurements can be divided by this number. Since the camera length is a critical portion of this equation it should be regularly calibrated. This is done not by measuring with a ruler but by creating a diffraction pattern with a standard sample of known d spacing at a given accelerating voltage and calculating the value for L by plugging R into the equation.
Intermediate And High Voltage EM [text 360-367]
Theoretical resolution in a transmission optical instrument can never exceed 1/2 the wavelength of the illumination. de Broglie's equation for calculating the wavelength of an excited electron is

λ = h / mv

Where λ = wavelength
h = Planck's constant (6.626 × 10⁻²⁷ erg·sec)
m = mass of electron
v = velocity of electron

By plugging in known values this equation can be reduced to

λ = (1.23/√V) nm    Where V = Accelerating Voltage (in volts)
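The reduced relation above, together with the half-wavelength resolution limit, can be worked through numerically. This simple form ignores the relativistic correction to the electron's mass, which becomes significant at high accelerating voltages:

```python
import math

def electron_wavelength_nm(volts):
    """Reduced de Broglie relation: lambda = (1.23 / sqrt(V)) nm, V in volts.
    Non-relativistic approximation."""
    return 1.23 / math.sqrt(volts)

def resolution_limit_nm(volts):
    """Theoretical limit: about half the illuminating wavelength."""
    return electron_wavelength_nm(volts) / 2.0

# 100 kV gives a wavelength of roughly 0.0039 nm; raising the accelerating
# voltage to 1 MV shortens it to about 0.0012 nm.
lam_100kv = electron_wavelength_nm(100_000)
lam_1mv = electron_wavelength_nm(1_000_000)
```

The tenfold increase in voltage shortens the wavelength by a factor of √10 ≈ 3.2, which is the basis of the "higher voltage, better resolution" argument in the next paragraph.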
In theory then the higher the accelerating voltage, the shorter the wavelength, and the greater the resolution capability! If 100 kV is good, 1000 kV (one million volts or 1 MV) is better! Microscopy at such accelerating voltages is known as High Voltage Electron Microscopy or HVEM.
A second advantage of HVEM is the ability of the beam to penetrate a specimen. Even a 125 kV TEM cannot penetrate a very thick specimen and most of our knowledge of the three dimensional nature of biological structures is from reconstructions made of serial thin sections each of which was laboriously sectioned, photographed, and pieced back together. Electrons that are accelerated to only 125 kV are more widely scattered than those of a 1 MV TEM. Because of this thicker specimens can be used than are possible with a conventional TEM. By viewing a greater portion of the specimen at a single time stereo pair images can be formed of a single thick section and viewed to gain three dimensional information about the specimen. This is best done on specimens that contain no embedding resin which would only serve to scatter the electrons.
A final advantage of HVEM is that because fewer of the beam electrons interact with the specimen (they are moving by too fast) specimen damage tends to be less in a HVEM than in a conventional TEM (assuming the same thickness sections). Of course, since one of the primary reasons for using a HVEM is to look at thick sections, this advantage is often canceled out by the increased number of interactions with the specimen.
One major drawback to HVEM is $. Although the optical systems are essentially the same as those found on conventional TEMs, the components associated with the 1 MV accelerating system usually mean that HVEMs are several stories tall and require a special building dedicated to their use. There are only a handful of active HVEMs in the U.S. and fewer than 30 worldwide. Most of these were built in the 1950's or 1960's. In an effort to gain some of the advantages without all of the expense a number of TEM manufacturers have introduced Intermediate Voltage Electron Microscopes (IVEMs). Although generally costing more than a conventional TEM, IVEMs can be housed in the same places as conventional TEMs and can also be operated at lower (80 - 100 kV) accelerating voltages. IVEMs and HVEMs are most popular among materials scientists, whose lattice images are often only possible at very high accelerating voltages.
One of the major discoveries of cellular structure was made through the use of HVEM, this being the complex microtrabecular lattice that is believed to extend throughout the cytoplasm of cells. There are many, however, who believe that this lattice is an artifact of dehydration and that when specimens are properly critical point dried no such lattice exists!