Next: Observing principles and tools Up: AY535 class notes Previous: Uncertainties and error analysis


# Noise equation: how do we predict expected uncertainties?


## Signal-to-noise

Astronomers often describe uncertainties in terms of the fractional error, i.e. the amplitude of the uncertainty divided by the amplitude of the quantity being measured; often the inverse of this, referred to as the signal-to-noise ratio (S/N), is used. Given an estimate of the number of photons expected from an object in an observation, we can calculate the signal-to-noise ratio:

$$\frac{S}{N} = \frac{S}{\sigma}$$

which is the inverse of the predicted fractional error (N/S).

Consider an object with observed photon flux (per unit area and time, e.g. from the signal equation above), $S'$, leading to a signal $S = S'Tt$, where $T$ is the telescope area and $t$ is the exposure time. In the simplest case, the only noise source is Poisson statistics from the source, in which case:

$$\sigma^2 = S = S'Tt$$

$$\frac{S}{N} = \frac{S}{\sqrt{S}} = \sqrt{S} = \sqrt{S'Tt}$$

In other words, the S/N increases as the square root of the object brightness, telescope area, efficiency, or exposure time. Note that $S$ is directly observable, so one can calculate the S/N for an observation without knowing the telescope area or exposure time! We've just broken $S$ down so that you can see the dependence on telescope area and/or exposure time explicitly.
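This scaling can be checked with a minimal Python sketch; the flux, area, and time values below are invented purely for illustration:

```python
import math

def snr_source_limited(s_flux, area, t):
    """Source-limited S/N = sqrt(S' * T * t).

    s_flux: photon flux S' (photons/m^2/s), area: telescope area T (m^2),
    t: exposure time (s). All numeric values here are assumptions.
    """
    return math.sqrt(s_flux * area * t)

snr_1 = snr_source_limited(100.0, 1.0, 60.0)   # one 60 s exposure
snr_4 = snr_source_limited(100.0, 1.0, 240.0)  # quadruple the exposure time
# quadrupling t (or T, or the flux) doubles the S/N
```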

### Background noise

A more realistic case includes the noise contributed by Poisson statistics of "background" light (more on the physical nature of this later), $B'$, which has units of flux per unit area on the sky (i.e., a surface brightness); note that this is also usually quoted in magnitudes.

As with the signal equation, $B'$ follows from an integral of the sky surface brightness over wavelength, weighted by the system efficiency $q_\lambda$.

The amount of background in our measurement depends on how we choose to make the measurement (how much sky area we include). Say we just use an aperture with area $A$, so the total observed background count is

$$AB = AB'Tt$$

Again, $AB$ is the directly observable quantity, but we split it into the quantities on which it depends to understand what factors are important in determining S/N.

The total number of photons observed, O, is

$$O = S + AB = (S' + AB')Tt$$

The variance of the total observed counts, from Poisson statistics, is:

$$\sigma_O^2 = O = S + AB = (S' + AB')Tt$$

To get the desired signal from the object only, we will need to measure the total signal and the background signal separately to estimate:

$$S = S'Tt = O - A\langle B\rangle$$

where $\langle B\rangle$ is some estimate we have obtained of the background per unit sky area. The noise in the measurement is

$$\sigma_S^2 \approx \sigma_O^2 = S + AB = (S' + AB')Tt$$

where the approximation is accurate if the background is determined to high accuracy, which one can do if one measures the background over a large area, thus getting a large number of background counts (with correspondingly small fractional error in the measurement).

This leads to a common form of the noise equation:

$$\frac{S}{N} = \frac{S}{\sqrt{S + AB}}$$

Breaking out the dependence on exposure time and telescope area, this is:

$$\frac{S}{N} = \frac{S'Tt}{\sqrt{(S' + AB')Tt}}$$

In the signal-limited case, $S' \gg AB'$, we get

$$\frac{S}{N} = \frac{S'Tt}{\sqrt{S'Tt}} = \sqrt{S'Tt}$$

In the background-limited case, $AB' \gg S'$, and

$$\frac{S}{N} = \frac{S'Tt}{\sqrt{AB'Tt}} = S'\sqrt{\frac{Tt}{AB'}}$$

As one goes to fainter objects, the S/N drops, and it drops faster when you're background limited. This illustrates the importance of dark-sky sites, and also the importance of good image quality (which allows a smaller aperture area $A$).

Consider two telescopes of collecting areas $T_1$ and $T_2$. If we observe for the same exposure time on each and ask how much fainter we can see with the larger telescope at a given S/N, we find:

$$S_2' = S_1'\frac{T_1}{T_2}$$

for the signal-limited case, but

$$S_2' = S_1'\left(\frac{T_1}{T_2}\right)^{1/2}$$

for the background-limited case.
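To put numbers on this scaling, here is a short sketch comparing hypothetical 4 m and 10 m telescopes (areas proportional to diameter squared; all values are assumptions):

```python
import math

# How much fainter can a 10 m telescope reach than a 4 m one,
# at fixed S/N and exposure time?
T1, T2 = 4.0**2, 10.0**2  # proportional areas; common factors cancel in ratios

flux_ratio_signal = T1 / T2                 # signal-limited: S2' = S1' (T1/T2)
flux_ratio_background = math.sqrt(T1 / T2)  # background-limited: sqrt scaling

# express the gains in magnitudes: dm = -2.5 log10(flux ratio)
dm_signal = -2.5 * math.log10(flux_ratio_signal)          # ~2.0 mag fainter
dm_background = -2.5 * math.log10(flux_ratio_background)  # ~1.0 mag fainter
```

The background-limited gain in magnitudes is exactly half the signal-limited gain, as the square-root scaling implies.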

### Instrumental noise

In addition to the uncertainties from Poisson statistics (statistical noise), there may be additional terms from instrumental uncertainties. A common example of this that is applicable for CCD detectors is readout noise, which is additive noise (with zero mean!) that comes from the detector and is independent of signal level. For a detector whose readout noise is characterized by σrn,

$$\frac{S}{N} = \frac{S'Tt}{\sqrt{S'Tt + AB'Tt + \sigma_{rn}^2}}$$

if a measurement is made in a single pixel. If an object is spread over Npix pixels, then

$$\frac{S}{N} = \frac{S'Tt}{\sqrt{S'Tt + AB'Tt + N_{pix}\sigma_{rn}^2}}$$

For large σrn, the behavior is the same as the background limited case. This makes it clear that if you have readout noise, image quality (and/or proper optics to keep an object from covering too many pixels) is important for maximizing S/N. It is also clear that it is critical to have minimum read-noise for low background applications (e.g., spectroscopy).
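The three-term noise equation can be wrapped in a small function. In this sketch the aperture area is expressed in pixels (so the background enters as counts per pixel times $N_{pix}$), and the numeric values are invented purely to show the limiting regimes:

```python
import math

def snr(src_counts, sky_counts_per_pix, npix, sigma_rn):
    """S/N from the noise equation: S / sqrt(S + B_tot + Npix*sigma_rn^2).

    src_counts: total source counts S = S'Tt; sky_counts_per_pix: background
    counts per pixel; npix: pixels summed; sigma_rn: readout noise per pixel.
    """
    var = src_counts + npix * sky_counts_per_pix + npix * sigma_rn**2
    return src_counts / math.sqrt(var)

bright = snr(1.0e4, 10.0, 9, 5.0)  # signal-dominated: close to sqrt(S) = 100
faint = snr(100.0, 400.0, 9, 5.0)  # background-dominated: far below sqrt(S)
```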

There are other possible additional terms in the noise equation, arising from things like dark current, digitization noise, uncertainties in sky determination, uncertainties from photometric technique, etc. (we'll discuss some of these later on), but in most applications, the three sources discussed so far – signal noise, background noise, and readout noise – are the dominant noise sources.

Consider in which applications one is likely to be signal dominated, background dominated, or readout noise dominated.

## Error propagation

Why are the three uncertainty terms in the noise equation added in quadrature? The measured quantity ($S$) is a sum $S + B - \langle B\rangle + \langle R\rangle$, where $\langle R\rangle = 0$ since readout noise has zero mean. The uncertainty in a summed series is computed by adding the individual uncertainties in quadrature; in the equation above, we have neglected the uncertainty in $\langle B\rangle$. To understand why they add in quadrature, let's consider general error propagation.

More reasons to consider error propagation: let's say we want to make some calculations (e.g., calibration, unit conversion, averaging, conversion to magnitudes, calculation of colors, etc.) using these observations: we need to be able to estimate the uncertainties in the calculated quantities that depend on our measured quantities.

Consider what happens if you have several known quantities with known error distributions and you combine these into some new quantity: we wish to know what the uncertainty is in the new quantity.

$$x = f(u, v, \dots)$$

The question is what is σx if we know σu, σv, etc.?

As long as uncertainties are small:

$$x_i - \langle x\rangle \simeq (u_i - \langle u\rangle)\frac{\partial x}{\partial u} + (v_i - \langle v\rangle)\frac{\partial x}{\partial v} + \dots$$

$$\sigma_x^2 = \lim_{N\to\infty}\frac{1}{N}\sum (x_i - \langle x\rangle)^2$$

$$= \left(\frac{\partial x}{\partial u}\right)^2\sigma_u^2 + \left(\frac{\partial x}{\partial v}\right)^2\sigma_v^2 + 2\frac{\partial x}{\partial u}\frac{\partial x}{\partial v}\sigma_{uv}^2 + \dots$$

The last term is the covariance, which relates to whether uncertainties are correlated.

$$\sigma_{uv}^2 = \lim_{N\to\infty}\frac{1}{N}\sum (u_i - \langle u\rangle)(v_i - \langle v\rangle)$$

If uncertainties are uncorrelated, then σuv = 0 because there's equal chance of getting opposite signs on vi for any given ui. When working out uncertainties, make sure to consider whether there are correlated errors! If there are, you may be able to reformulate quantities so that they have independent errors: this can be very useful!

Examples for uncorrelated errors:

• addition (or subtraction):

$$x = u + v$$

$$\sigma_x^2 = \sigma_u^2 + \sigma_v^2$$

• multiplication (or division, if you formulate the division problem as a multiplication!):

$$x = uv$$

$$\sigma_x^2 = v^2\sigma_u^2 + u^2\sigma_v^2$$

• natural logs:

$$x = \ln u$$

$$\sigma_x^2 = \sigma_u^2/u^2$$

For base-10 logs, note that $\log x = \log e \, \ln x$, so $\sigma_{\log u} = \log e \, (\sigma_u/u) \approx 0.434\,\sigma_u/u$.

Note that when dealing with logarithmic quantities, uncertainties in the log correspond to fractional uncertainties in the raw quantity.
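These rules translate directly into code; here is a small sketch for uncorrelated errors (the helper names are my own):

```python
import math

def sigma_sum(sigma_u, sigma_v):
    """x = u + v (or u - v): uncertainties add in quadrature."""
    return math.sqrt(sigma_u**2 + sigma_v**2)

def sigma_product(u, sigma_u, v, sigma_v):
    """x = u*v: sigma_x^2 = v^2 sigma_u^2 + u^2 sigma_v^2."""
    return math.sqrt((v * sigma_u)**2 + (u * sigma_v)**2)

def sigma_log10(u, sigma_u):
    """x = log10(u): sigma_x = log10(e) * sigma_u / u."""
    return math.log10(math.e) * sigma_u / u
```

Note that `sigma_log10` returns roughly 0.434 times the fractional uncertainty, matching the statement above that uncertainties in the log track fractional uncertainties in the raw quantity.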

### Distribution of resultant uncertainties

When propagating errors, even though you can calculate the variances in the new variables, the distribution of uncertainties in the new variables is not, in general, the same as the distribution of uncertainties in the original variables; e.g., if the uncertainties in the individual variables are normally distributed, the uncertainties in an output variable are not necessarily normally distributed.

When two normally distributed variables are added, however, the sum is also normally distributed.

### Determining sample parameters: averaging measurements

We've covered uncertainties in single measurements. Next we turn to averaging measurements. Say we have multiple observations and want the best estimate of the mean and variance of the population, e.g. multiple measurements of stellar brightness. Here we'll define the best estimate of the mean as the value which maximizes the likelihood that our estimate equals the true parent population mean.

For equal uncertainties, this estimate just gives our normal expression for the sample mean:

$$\bar{x} = \frac{\sum x_i}{N}$$

Using error propagation, the estimate of the uncertainty in the sample mean is given by:

$$\sigma_\mu^2 = \sum\left(\frac{\partial\bar{x}}{\partial x_i}\right)^2\sigma_i^2 = \frac{\sigma^2}{N}$$

But what if the uncertainties on each observation aren't equal, say, for example, because we have observations made with several different exposure times? Then the optimal determination of the mean uses a weighted mean:

$$\mu = \frac{\sum x_i/\sigma_i^2}{\sum 1/\sigma_i^2}$$

and the estimated uncertainty in this value is given by:

$$\sigma_\mu^2 = \sum\left(\frac{\partial\mu}{\partial x_i}\right)^2\sigma_i^2 = \frac{1}{\sum 1/\sigma_i^2}$$

where the $\sigma_i$ are the individual uncertainties (people often talk about the weight of an observation, i.e. $w_i = 1/\sigma_i^2$: large weight means small uncertainty).

This is a standard result for determining sample means from a set of observations with different weights.

However, there can sometimes be a subtlety in applying this formula, which has to do with the question: how do we go about choosing the weights/errors, σi? We know we can estimate σ using Poisson statistics for a given count rate, but remember that this is a sample variance (which may be based on a single observation!) not the true population variance. This can lead to biases.

Consider observations of a star made on three nights, with measurements of 40, 50, and 60 photons. It's clear that the mean observation is 50 photons. However, beware of being trapped by your uncertainty estimates. From each observation alone, you would estimate uncertainties of $\sqrt{40}$, $\sqrt{50}$, and $\sqrt{60}$. If you plug these uncertainty estimates into a computation of the weighted mean, you'll get a mean rate of 48.65!

Using the individual estimates of the variances, we'll bias values to lower rates, since these will have lower estimated variances.
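The bias is easy to reproduce; this sketch uses each observation's own counts as its Poisson variance estimate, which is exactly the trap described above:

```python
counts = [40.0, 50.0, 60.0]

# naive per-observation weights: w_i = 1/sigma_i^2 with sigma_i^2 = c_i
weights = [1.0 / c for c in counts]
naive_mean = sum(w * c for w, c in zip(weights, counts)) / sum(weights)
plain_mean = sum(counts) / len(counts)
# naive_mean comes out near 48.65, biased low relative to plain_mean = 50
```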

Note that it's pretty obvious from this example that you should just weight all of the observations equally. However, this certainly isn't always the right thing to do. For example, consider the situation in which you have three exposures of different exposure times, and you are calculating the photon rate (counts/s). Here you probably want to give the longer exposures higher weight (at least, if they are signal or background limited). In this case, you again don't want to use the individual uncertainty estimates or you'll introduce a bias. There is a simple solution here also: just weight the observations by the exposure time.

However, while this works fine for Poisson uncertainties (variances proportional to count rate), it isn't strictly correct if there are also instrumental uncertainties that don't scale with exposure time. The presence of readout noise has this effect: if all exposures are readout noise dominated, one would want to weight them equally, while if readout noise dominates the shorter but not the longer exposures, one would want to weight the longer exposures even more heavily than the exposure time ratios suggest! The only way to properly average measurements in this case is to estimate a sample mean, then use this value scaled to the appropriate exposure times as the input for the Poisson uncertainties. See the next subsection for a more comprehensive discussion.

Another subtlety: averaging counts and converting to magnitudes is not the same as averaging magnitudes!

### Weighted mean of a series of exposures

Say you have a bunch of measurements of the brightness of an object. Under the assumption that the object is not variable, you wish to get the best estimate of the count rate of the object. Your measurements are not all at the same exposure time. So you have a bunch of measurements:

$$c_i = r t_i$$

where each of the $c_i$ has noise associated with it.

For each measurement, you can calculate a rate:

$$r_i = \frac{c_i}{t_i}$$

Now you wish to average all of the measured rates to get the best estimate of the true rate. Because the measurements were taken with different exposure times, they have different amounts of uncertainty, so you want to use a weighted mean to get the best estimate.

To calculate a weighted mean, you need to determine the variances of the observations. Given the true rate, r, the variances of the measurements are

$$\sigma_i^2 = r t_i + n\sigma_{rn}^2$$

where n is the number of pixels you've summed over to get the total counts, and σrn is the readout noise.

To get the variances of the rates, $r_i = c_i/t_i$, we need to divide the variances of the counts by $t_i^2$ (i.e., divide the standard deviations by $t_i$), so

$$\sigma_i^2 = \frac{r}{t_i} + \frac{n\sigma_{rn}^2}{t_i^2}$$

The problem is that you don't know what r is. Normally, you might just take

$$\sigma_i^2 = \frac{r_i}{t_i} + \frac{n\sigma_{rn}^2}{t_i^2}$$

but the problem with this is that you'd be weighting each measurement as if it had a different true rate, when in fact they all have the same true rate; this can lead to biases.

Let's go back to the correct formulation, which gives for the weighted mean:

$$\langle r\rangle = \frac{\sum r_i/\sigma_i^2}{\sum 1/\sigma_i^2}$$

In the case of all equal exposure times, then the σ2 come out of the sums and cancel between the numerator and denominator, leaving

$$\langle r\rangle = \frac{\sum r_i}{N}$$

i.e., an unweighted mean, just as we'd expect

In the case of zero readout noise, $\sigma_i^2 = r/t_i$, and we have

$$\langle r\rangle = \frac{\sum r_i t_i/r}{\sum t_i/r}$$

The $r$ can now come out of the sums and cancel in the numerator and denominator, so the weighted mean is now independent of the true rate and becomes

$$\langle r\rangle = \frac{\sum r_i t_i}{\sum t_i} = \frac{\sum c_i}{\sum t_i}$$

i.e., calculate the mean by weighting by the exposure times.

For the general case of unequal exposure times in the presence of readout noise, you can avoid biases by choosing any reasonable estimate of r for use in calculating the weights. For example, you could use the observed rate with the highest S/N as the basis for calculating weights. Or you could even iterate the solution a couple of times. If you were to do this, you'd find that the weighted mean hardly depends at all on what you choose for the estimate of r so long as it is close to the true r.
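This iterative procedure can be sketched as follows (the helper name and numeric values are hypothetical; weights come from a common rate estimate, never from the individual rates):

```python
def weighted_rate(counts, times, sigma_rn, n=1, iters=3):
    """Weighted mean rate from exposures of unequal length.

    Weights are built from a *common* rate estimate r, using
    variance(r_i) = r/t_i + n*sigma_rn^2/t_i^2; iterating refines r.
    """
    r = sum(counts) / sum(times)  # initial guess; exact when sigma_rn == 0
    for _ in range(iters):
        w = [1.0 / (r / t + n * sigma_rn**2 / t**2) for t in times]
        r = sum(wi * c / t for wi, c, t in zip(w, counts, times)) / sum(w)
    return r

# three exposures, all consistent with a true rate of 4 counts/s
rate = weighted_rate([40.0, 100.0, 160.0], [10.0, 25.0, 40.0], sigma_rn=5.0)
```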

### Can you split exposures?

Although from S/N considerations, one can determine the required number of counts you need (exposure time) to do your science, when observing, one must also consider the question of whether this time should be collected in single or in multiple exposures, i.e. how long individual exposures should be. There are several reasons why one might imagine that it is nicer to have a sequence of shorter exposures rather than one single longer exposure (e.g., tracking, monitoring of photometric conditions, cosmic ray rejection, saturation issues), so we need to consider under what circumstances doing this results in poorer S/N.

Consider an object with photon flux $S'$, background surface brightness $B'$, and a detector with readout noise $\sigma_{rn}$. A single short exposure of time $t$ has a variance:

$$\sigma_S^2 = S'Tt + AB'Tt + N_{pix}\sigma_{rn}^2$$

If N exposures are summed, the resulting variance will be

$$\sigma_N^2 = N\sigma_S^2 = N(S'Tt + AB'Tt + N_{pix}\sigma_{rn}^2)$$

If a single long exposure of length Nt is taken, we get

$$\sigma_L^2 = S'TNt + AB'TNt + N_{pix}\sigma_{rn}^2$$

The ratio of the noises, or the inverse ratio of the S/N (since the total signal measured is the same in both cases), is

$$\frac{\sigma_N}{\sigma_L} = \left(\frac{S'TNt + AB'TNt + NN_{pix}\sigma_{rn}^2}{S'TNt + AB'TNt + N_{pix}\sigma_{rn}^2}\right)^{1/2}$$

The only difference is in the readout noise term! In the signal- or background-limited regimes, exposures can be added with no loss of S/N. However, if readout noise is significant, then splitting exposures leads to reduced S/N.
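A quick numeric check of this ratio (the counts per exposure are invented):

```python
import math

def noise_ratio_split(src, sky, npix, sigma_rn, n_exp):
    """sigma_N / sigma_L for n_exp stacked exposures vs one long exposure.

    src, sky: source and total background counts in ONE short exposure;
    npix: pixels in the aperture; sigma_rn: readout noise per pixel.
    """
    var_split = n_exp * (src + sky + npix * sigma_rn**2)  # n_exp reads
    var_long = n_exp * (src + sky) + npix * sigma_rn**2   # one read
    return math.sqrt(var_split / var_long)

no_penalty = noise_ratio_split(1.0e4, 1.0e3, 9, 0.0, 4)  # no readout noise: 1.0
worst_case = noise_ratio_split(0.0, 0.0, 9, 10.0, 4)     # readout-dominated: 2.0
```

In the readout-dominated limit the penalty approaches $\sqrt{N}$, consistent with the equation above.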

## Random errors vs systematic errors

So far, we've been discussing random errors. There is an additional, usually more troublesome, type of error known as systematic error. These don't occur randomly but rather are correlated with some, possibly unknown, variable relating to your observations, and can have the effect of not just adding spread around the true value that you are trying to measure, but of actually shifting the measured mean away from the true value.

EXAMPLE : flat fielding

EXAMPLE : WFPC2 CTE

Note also that in some cases, systematic errors can masquerade as random errors in your test observations (or be missing altogether if you don't take data in exactly the same way), but actually be systematic in your science observations.

EXAMPLE: flat fielding, subpixel QE variations.

Note that error analysis from expected random errors may be the only clue you get to discovering systematic errors. To discover systematic errors, plot residuals vs. everything!
