The earth's atmosphere has several different effects: it emits light, it absorbs light, it shifts the apparent direction of incoming light, and it degrades the coherence of incoming light, leading to degradation of image quality when collecting light over a large aperture.
Many of these sources are emission-line sources, not continuum sources, as shown in this plot. How bright are these sources?
[Table: sky brightness in U, B, V, R, and I (mag/square arcsec) as a function of moon age in days]
The significance of moonlight, especially in the optical, gives rise to the splitting of nights into dark, grey, and bright time: dark time means no moon above the horizon, grey time means something like a less-than-50%-illuminated moon above the horizon, and bright time is everything else.
For broadband work, for example in the V band, msky ∼ 22 mag/arcsec^2 at a good site, so we switch over to being background limited around m = 22 for good image quality, and around m = 20 for poorer image quality. Consequently, image quality matters for faint objects! Moonlight is very significant, hence faint optical imaging requires dark time.
Optical spectroscopy: sky emission generally not much of a problem except around lines so long as moon is down (or work on bright objects); low dispersion observations can be background-limited for long exposures, but at higher dispersion or shorter exposures, spectroscopy is often readout-noise limited.
Most of the emission in the near-IR is from emission lines of OH, the so-called ``OH forest.'' For broadband work in H band, m ∼ 13.5 mag/arcsec^2; in K band, m ∼ 12.5 mag/arcsec^2. So for all except bright objects, we're background limited. This leads to some fundamental differences in data acquisition and analysis between the near-IR and the optical. For infrared spectra, it's harder to estimate S/N: it depends on where your feature is located. Moonlight is not very significant, hence much IR work is done in bright time.
Farther in the IR (5μ+), thermal emission from the sky dominates and is extremely bright. In fact, when working at wavelengths with thermal background, the exposure time is often limited by the time it takes to saturate the detector with background (sky) light... in seconds or less!
Sky brightness from most sources varies with time and position on the sky in an irregular fashion. Consequently, it's essentially impossible to estimate the sky a priori: the sky must be determined from your observations, and if your observations don't distinguish object from sky, you'd better measure the sky close by in location and in time; this is especially critical in the IR. See some IR movies; a spectral movie from ESO/Paranal is available here.
Earth's atmosphere doesn't transmit 100% of light. Various things contribute to the absorption of light:
All are functions of wavelength, time to some extent, and position in sky.
In the optical part of the spectrum, extinction is a roughly smooth function of wavelength and arises from a combination of ozone, Rayleigh scattering, and aerosols, as shown in this plot. The optical extinction can vary from night to night or season to season, as shown in this plot. Of course, this is showing the variation over a set of photometric nights; if there are clouds, then the level of variation is much higher! Because of this variation, you must determine the amount of extinction on each night separately if you want accuracy better than a few percent (even for photometric nights!). Generally, the shape of the extinction curve as a function of wavelength probably varies less than the amplitude at any given wavelength. Because of this, one commonly uses mean extinction coefficients when doing spectroscopy where one often only cares about relative fluxes. To first order, the extinction from clouds is ``gray'', i.e. not a function of wavelength, so relative fluxes can be obtained even with some clouds present.
There is significant molecular absorption in the far-red part of the optical spectrum; in particular, note the A (7600 Å) and B (6800 Å) atmospheric bands from O2.
In the infrared, the extinction does not vary so smoothly with wavelength because of the effect of molecular absorption. In fact, significant absorption bands define the so-called infrared windows (yJHKLM), as shown in the near-IR in this plot. At longer wavelengths, the broad absorption-band behavior continues, as shown in this plot. In this figure, transmission = f(b_λ l), where l is the path length (in units of airmass).
The L band is at 3.5μ, M band at 5μ.
Note that even within the IR ``windows'', there can still be significant telluric absorption features, e.g. from CO2, H2O, and CH4. When doing IR spectroscopy, one needs to be aware of these and possibly attempt to correct for them, taking care not to confuse them with real stellar features!
Clearly, if the light has to pass through a larger path in the Earth's atmosphere, more light will be scattered/absorbed; hence one expects the least amount of absorption directly overhead (zenith), increasing as one looks down towards the horizon.
Definition of airmass: the path length that light takes through the atmosphere relative to the length at the zenith, so X ≡ 1 vertically (at z = 0). In the plane-parallel approximation, X ≈ sec z, where the zenith distance, z, can be computed from:

cos z = sin φ sin δ + cos φ cos δ cos h

with φ the observer's latitude, δ the declination, and h the hour angle of the object.
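The plane-parallel airmass follows directly from this relation; a minimal sketch (the function name and the example latitude are illustrative choices, not part of the notes):

```python
import math

def airmass(latitude_deg, dec_deg, hour_angle_deg):
    """Plane-parallel airmass X ~ sec z, using
    cos z = sin(phi) sin(delta) + cos(phi) cos(delta) cos(h)."""
    phi = math.radians(latitude_deg)
    delta = math.radians(dec_deg)
    h = math.radians(hour_angle_deg)
    cos_z = (math.sin(phi) * math.sin(delta)
             + math.cos(phi) * math.cos(delta) * math.cos(h))
    return 1.0 / cos_z

# A star passing through the zenith (delta = latitude, h = 0) has X = 1;
# at z = 60 degrees, sec z = 2.
print(airmass(32.0, 32.0, 0.0))
print(airmass(0.0, 0.0, 60.0))
```

Note that sec z diverges at the horizon; the plane-parallel form is only trustworthy for moderate zenith distances.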
How much light is lost going through the atmosphere?
Consider a thin sheet of atmosphere, with incident flux F and outgoing flux F + dF. Let the thin sheet have opacity κ = Nσ, where N is the number density of absorbers/scatterers, and σ is the cross-section per absorber/scatterer. Then

dF = −κ F ds

and integrating through the atmosphere gives

F = F0 e^(−τ),   τ ≡ ∫ κ ds

where τ is the optical depth along the line of sight. If the optical depth through the atmosphere is just proportional to the physical path length (true if the same atmospheric structure is sampled in different directions), then

τ = τ0 X

where τ0 is the optical depth at the zenith. Expressing things in magnitudes, we have:

m = m0 + 2.5 (log10 e) τ0 X = m0 + 1.086 τ0 X

We can define the extinction coefficient kλ ≡ 1.086 τ0,λ, so that

mλ = m0,λ + kλ X
Since the amount of absorption from the Earth's atmosphere varies with time, applying this characterization requires measurement of the extinction coefficient(s) on each night in which one wants to apply them. The basic principle is straightforward: if one observes a star at a range of airmasses as it moves across the sky, one can determine the extinction coefficient in each filter by fitting (e.g., in a least-squares sense) for the slope of the relation between airmass and the measured instrumental magnitude; this slope is the extinction coefficient.
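A sketch of this fit with made-up observations of a single star; the true coefficient (0.15 mag/airmass), zenith instrumental magnitude, and noise level are all invented for illustration:

```python
import numpy as np

# Invented instrumental magnitudes of one star over a range of airmasses:
# true extinction k = 0.15 mag/airmass, zenith instrumental magnitude 14.2,
# plus small Gaussian measurement noise.
X = np.array([1.0, 1.3, 1.7, 2.1, 2.5])
rng = np.random.default_rng(0)
m_inst = 14.2 + 0.15 * X + rng.normal(0.0, 0.005, X.size)

# Least-squares fit of m_inst = m0 + k * X; the slope is the
# extinction coefficient for this filter.
k, m0 = np.polyfit(X, m_inst, 1)
print(f"k = {k:.3f} mag/airmass, m0 = {m0:.2f}")
```

The recovered slope agrees with the input coefficient to within the noise.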
In practice, the determination of extinction is often combined with the determination of the zeropoint and the transformation coefficients as discussed earlier. The basic equations are of the form:

m_v = V + z_v + k_v X + c_v (B − V)
m_b = B + z_b + k_b X + c_b (B − V)
where the subscripts refer to measurements in different filters, and where I have ignored second order coefficients. The advantage of combining the equations is that you can use the information about the known magnitudes of the standard stars for the extinction term, so you can combine observations of different standards at different airmasses to derive the extinction coefficient and do not need to observe the same star at multiple airmasses.
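This combined solution is a linear least-squares problem. In the sketch below, the standard-star magnitudes, colors, airmasses, and the true coefficients are all invented, and the sign convention for the zeropoint varies between observers:

```python
import numpy as np

# Hypothetical standard-star observations in one filter: catalog V
# magnitudes, B-V colors, and the airmasses at which each was observed.
# Generated with true zeropoint 25.0, extinction 0.18, color term 0.05
# (error-free for clarity).
V  = np.array([12.1, 13.4, 11.8, 14.0, 12.9, 13.1])
BV = np.array([0.30, 0.80, 1.10, 0.50, 0.00, 0.65])
X  = np.array([1.1, 1.4, 2.0, 1.2, 1.8, 1.6])
m_inst = V + 25.0 + 0.18 * X + 0.05 * BV

# Solve m_inst - V = zp + k * X + c * (B-V) by linear least squares;
# different standards at different airmasses constrain k jointly.
A = np.column_stack([np.ones_like(X), X, BV])
(zp, k, c), *_ = np.linalg.lstsq(A, m_inst - V, rcond=None)
print(zp, k, c)  # recovers zp ~ 25.0, k ~ 0.18, c ~ 0.05
```

With real data one would add measurement weights and iterate with outlier rejection, but the structure of the solution is the same.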
The direction of light as it passes through the atmosphere is also changed because of refraction, since the index of refraction changes through the atmosphere. The amount of change is characterized by Snell's law:

μ1 sin z1 = μ2 sin z2
Let z0 be the true zenith distance, z be the observed zenith distance, zn be the observed zenith distance at layer n in the atmosphere, μ be the index of refraction at the surface, and μn be the index of refraction at layer n. Applying Snell's law successively from the top of the atmosphere down through the layers:

sin z0 = μ1 sin z1 = μ2 sin z2 = ... = μ sin z
We define astronomical refraction, r, to be the angular amount that the object is displaced by the refraction of the Earth's atmosphere:

r ≡ z0 − z
In cases where r is small (pretty much always), expanding sin z0 = sin(z + r) ≈ sin z + r cos z and using sin z0 = μ sin z gives:

r ≈ (μ − 1) tan z ≡ R tan z
A typical value of the index of refraction is μ∼1.00029, which gives R = 60 arcsec (red light).
The direction of refraction is such that a star apparently moves towards the zenith. Consequently, in most cases the star's apparent position is shifted in both RA and DEC.
Note that the expression for r is only accurate for small zenith distances (z ≲ 45°); at larger z, the plane-parallel approximation breaks down. Observers have empirically found that a form like

r ≈ A tan z − B tan^3 z

works well, with A ∼ 58″ and B ∼ 0.07″ at typical sites.
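Both forms are easy to evaluate numerically. In this sketch the coefficients A ≈ 58.3″ and B ≈ 0.067″ are representative values quoted in the literature, not universal constants:

```python
import math

def refraction_plane_parallel(z_deg, mu=1.00029):
    """r = (mu - 1) tan z, in arcsec; valid for z <~ 45 deg."""
    return (mu - 1.0) * math.tan(math.radians(z_deg)) * 206265.0

def refraction_empirical(z_deg, A=58.3, B=0.067):
    """Empirical r = A tan z - B tan^3 z (arcsec); A and B are
    representative values and depend on site conditions."""
    t = math.tan(math.radians(z_deg))
    return A * t - B * t**3

# At z = 45 deg the two forms agree to within a few arcsec;
# at large z the tan^3 correction becomes important.
print(refraction_plane_parallel(45.0))
print(refraction_empirical(45.0))
print(refraction_empirical(70.0))
```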
Why is it important to understand refraction? Clearly, it's relevant for pointing a telescope, but this is generally handled automatically in the telescope pointing software. If you're just taking images, then the stars are just a tiny bit moved relative to each other, but who really cares? One key issue today is the use of multiobject spectrographs, where slits or fibers need to be placed on objects to accuracies of a fraction of an arcsec. For small fields, refraction isn't too much of an issue, but for large fields it can be significant; note the SDSS plates!
The other extremely important effect of refraction arises because the index of refraction varies with wavelength, so the astronomical refraction also depends on wavelength:

r(λ) ≈ [μ(λ) − 1] tan z
This gives rise to the phenomenon of atmospheric dispersion, or differential refraction. Because of the variation of the index of refraction with wavelength, every object actually appears as a little spectrum with the blue end towards the zenith. The spread in object position is proportional to tan z.
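A rough numerical illustration: the (μ − 1) values for the two wavelengths below are approximate numbers for dry air, used only to show the size of the effect and its tan z scaling:

```python
import math

# Approximate index-of-refraction excess (mu - 1) for dry air at two
# wavelengths (Angstroms); illustrative values only.
n_minus_1 = {4000: 2.98e-4, 7000: 2.89e-4}

def dispersion_arcsec(z_deg, lam_blue=4000, lam_red=7000):
    """Angular separation between the blue and red images:
    delta r = [mu(blue) - mu(red)] tan z, converted to arcsec."""
    t = math.tan(math.radians(z_deg))
    return (n_minus_1[lam_blue] - n_minus_1[lam_red]) * t * 206265.0

# A few arcsec at z = 60 deg: comparable to or larger than a
# spectrograph slit width.
print(dispersion_arcsec(60.0))
```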
This effect is critical to understand for spectroscopy when using a slit or a fiber, since the location of an object on the sky depends on the wavelength. If you point to the location at one wavelength, you can miss it at another wavelength, and the relative amount of flux you collect will be a function of wavelength, something you may want to take into account if you're interested in the relative flux (continuum shape) over a broad wavelength range. Note the consequent importance of the relation between the slit orientation and the parallactic angle: a slit aligned with the parallactic angle will not lose light as a function of wavelength, but otherwise it will. However, for a slit at the parallactic angle, be careful about matching up flux at different wavelengths for extended objects!
References: Coulman, ARAA 23, 19; Beckers, ARAA 31, 13; Schroeder 16.II.
Generally, a perfect astronomical optical system will make a perfect (diffraction-limited) image for an incoming plane wavefront of light. The Earth's atmosphere is turbulent and variations in the index of refraction cause the plane wavefront from distant objects to be distorted. These cause several astronomical effects:
The time variation scales are several milliseconds and up.
The effect of seeing can be derived from theories of atmospheric turbulence, worked out originally by Kolmogorov, Tatarski, Fried. Here, I'll quote some pertinent results, without derivation.
A turbulent field can be described statistically by a structure function:

D(r) ≡ ⟨|φ(x) − φ(x + r)|^2⟩
Kolmogorov turbulence gives:

D_φ(r) = 6.88 (r/r0)^(5/3)

where r0 is the Fried parameter.
Physically, r0 is (roughly) inversely proportional to the image size from seeing:

FWHM ≈ 0.98 λ/r0

(in radians).
Seeing is more important than diffraction at shorter wavelengths (and for larger apertures), since r0 scales roughly as λ^(6/5) while the diffraction limit scales as λ/D. Diffraction is more important at longer wavelengths (and for smaller apertures); the effects of diffraction and seeing cross over in the IR for most astronomical-sized telescopes (∼5 microns for a 4m), and the crossover falls at a shorter wavelength for a smaller telescope or better seeing.
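A sketch of the crossover, assuming r0 = 10 cm at 5000 Å, the λ^(6/5) scaling of r0, and the standard 0.98 λ/r0 (seeing) and 1.22 λ/D (diffraction) FWHM approximations:

```python
ARCSEC_PER_RAD = 206265.0

def r0_cm(lam_angstrom, r0_500nm_cm=10.0):
    """Fried parameter, assuming r0 ~ lambda^(6/5),
    normalized to 10 cm at 5000 A (typical good site)."""
    return r0_500nm_cm * (lam_angstrom / 5000.0) ** 1.2

def seeing_fwhm_arcsec(lam_angstrom):
    """Seeing-limited FWHM ~ 0.98 lambda / r0."""
    lam_m = lam_angstrom * 1e-10
    return 0.98 * lam_m / (r0_cm(lam_angstrom) * 1e-2) * ARCSEC_PER_RAD

def diffraction_fwhm_arcsec(lam_angstrom, D_m=4.0):
    """Diffraction limit ~ 1.22 lambda / D for a 4m telescope."""
    lam_m = lam_angstrom * 1e-10
    return 1.22 * lam_m / D_m * ARCSEC_PER_RAD

# At 5000 A: seeing ~1 arcsec dominates diffraction (~0.03 arcsec).
# At 5 microns (50000 A) the two are within a factor of a few.
print(seeing_fwhm_arcsec(5000), diffraction_fwhm_arcsec(5000))
print(seeing_fwhm_arcsec(50000), diffraction_fwhm_arcsec(50000))
```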
The meat of r0 is in ∫ C_n^2 dh; as you might expect, this varies from site to site and also in time. At most sites, there seem to be three regimes: a ``surface layer'' (wind-surface interactions and manmade seeing), a ``planetary boundary layer'' (influenced by diurnal heating), and the ``free atmosphere'' (the tropopause at ∼10 km: high wind shears), as seen in this plot. A typical astronomical site has r0 ∼ 10 cm at 5000 Å.
We also want to consider the coherence of the same turbulence pattern over the sky: this coherence angle is called the isoplanatic angle, and the region over which the turbulence pattern is the same is called the isoplanatic patch. This is relevant to adaptive optics, where we will try to correct for the differences across the telescope aperture; if we do a single correction, how large a field of view will be corrected?
In the infrared, r0 ∼ 70 cm; with H ∼ 5000 m this gives θ ∼ 9 arcsec for the free atmosphere. For the boundary layer, however, the isoplanatic patch is considerably larger (part of the motivation for ground-layer AO).
Note however, that the ``isoplanatic patch for image motion" (not wavefront) is ∼0.3D/H. For D = 4 m, H∼5000 m, θkin = 50 arcsec. This is relevant for low-order atmospheric correction, i.e., tip-tilt, where one is doing partial correction of the effect of the atmosphere.
As a final practical discussion of seeing, note that atmospheric turbulence is not directly correlated with the presence of clouds. In fact, the seeing is often better with thin cirrus than when it is clear!
Although the Earth's atmosphere provides a limit on the quality of images that can be obtained, at many observatories, there are other factors that can dominate the image quality budget. These have been recognized over the past several decades to be significant effects.
Dome seeing arises from the turbulence pattern around the dome and at the interface between the inside and outside of the dome. Even small temperature differences can lead to significant image degradation.
Mirror seeing arises from turbulence right above the surface of the mirror, which can arise if the mirror temperature differs from that of the air above it.
Wind shake of the telescope can also contribute to image quality.
Poor telescope tracking can also contribute.
Finally, the design, quality, and alignment of the telescope optics can contribute to image quality. In general, however, telescopes are designed so that the image degradation from the optics is significantly smaller than that arising from seeing.
The ``quality'' of an image can be described in many different ways. The overall shape of the distribution of light from a point source is specified by the point spread function. Diffraction gives a basic limit to the quality of the PSF, but seeing, aberrations, or image motion add to structure/broadening of the PSF. For a good ground-based telescope, seeing is generally the dominant component of the PSF. The PSF is intrinsically a 2D function. In the case where image quality is azimuthally symmetric, then this can be represented by a 1D function or some parameterization of a 1D function.
Probably the most common way of describing the seeing is by specifying the full-width-half-maximum (FWHM) of the image, which may be estimated either by direct inspection or by fitting a function (usually a Gaussian); note the correspondence of FWHM to the σ of a Gaussian: FWHM = 2√(2 ln 2) σ ≈ 2.3548 σ. Note that when you observe a PSF on a detector, you're really getting the PSF integrated over pixels; some people call the pixel-integrated PSF the effective PSF. Remember that the FWHM doesn't fully specify a PSF, and one should always consider how applicable the quantity is.
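The FWHM-σ correspondence is easy to check numerically on a sampled Gaussian profile:

```python
import numpy as np

# Measure the full width at half maximum of a Gaussian profile directly
# and compare with FWHM = 2 sqrt(2 ln 2) sigma ~ 2.3548 sigma.
sigma = 1.7
x = np.linspace(-10.0, 10.0, 200001)
profile = np.exp(-x**2 / (2.0 * sigma**2))  # peak value is 1 at x = 0

# Width of the region where the profile is at or above half its peak.
half_max_width = np.ptp(x[profile >= 0.5])
print(half_max_width, 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma)
```

Both numbers agree to the grid resolution, ~4.00 for σ = 1.7.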
Another way of describing the quality of an image is to specify its modulation transfer function (MTF). The MTF and PSF are a Fourier transform pair, so the MTF gives the power in an image on various spatial scales. Turbulence theory makes a prediction for the long-exposure MTF from seeing:

MTF(ν) = exp[−3.44 (λν/r0)^(5/3)]

where ν is the angular spatial frequency.
A potentially better empirical fitting function is a Moffat function:

PSF(r) ∝ [1 + (r/α)^2]^(−β)

which has more extended wings than a Gaussian.
Another way of characterizing the PSF is by giving the encircled energy as a function of radius, or at some specified radius. The encircled energy is just the cumulative integral of the PSF. Encircled energy requirements are often used for specifying optical tolerances.
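As a small numerical illustration of the cumulative-integral definition: for a Gaussian PSF, the encircled energy reaches 50% at a radius of FWHM/2 (the grid size and extent below are arbitrary choices):

```python
import numpy as np

# Encircled energy of a 2D Gaussian PSF: analytically,
# EE(r) = 1 - exp(-r^2 / (2 sigma^2)), which equals 0.5 at r = FWHM/2.
sigma = 1.0
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

# Sample the PSF on a grid and normalize to unit total energy.
y, x = np.mgrid[-6.0:6.0:1001j, -6.0:6.0:1001j]
psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
psf /= psf.sum()

# Encircled energy = cumulative sum of the PSF within radius r.
r = np.hypot(x, y)
ee_half = psf[r <= fwhm / 2.0].sum()
print(ee_half)  # ~0.5 for a Gaussian
```

For PSFs with stronger wings (e.g., Moffat profiles), the radius enclosing 50% of the energy is larger relative to the FWHM, which is one reason encircled energy is a useful complement to the FWHM.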
A final way of characterizing the image quality, more commonly used in adaptive optics applications, is the Strehl ratio. The Strehl ratio is the ratio between the peak amplitude of the PSF and the peak amplitude expected in the presence of diffraction only. With normal atmospheric seeing, the Strehl ratio is very low. However, the Strehl ratio is often used when discussing the performance of adaptive optics systems (more later).