Consider an ensemble of measurements taken at a light level $L$ (in DN). The noise in this ensemble should be $\sigma = \sqrt{G L + r_n^2}$, where $G$ is the gain in electrons/DN, $r_n$ is the readout noise in electrons, and $\sigma$ is the noise in electrons.
If you do this at a lot of different light levels, then you can plot $\sigma^2$ vs $L$, and the slope should give you $G$ and the intercept $r_n^2$. However, remember that if you compute $\sigma$ from the images, this gives $\sigma$ in DN, so the slope will give you $1/G$ (and the intercept $(r_n/G)^2$). This test is also excellent for checking the basic performance of a detector. Deviations from linearity can also usually be seen on such a plot.
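As an illustration, here is a minimal sketch of fitting such a photon-transfer plot with a straight line; the array names and the synthetic data points are assumptions for demonstration, not measurements from the text.

```python
import numpy as np

# Hypothetical measurements: for each light level, the mean signal in DN and
# the ensemble variance in DN^2 (illustrative numbers, roughly consistent
# with a gain of 4 e-/DN and a read noise of 10 e-).
mean_dn = np.array([100.0, 500.0, 1000.0, 5000.0, 10000.0, 20000.0])
var_dn = np.array([31.3, 131.3, 256.3, 1256.3, 2506.3, 5006.3])

# In DN, variance = L/G + (rn/G)^2, so a straight-line fit of variance
# against mean gives slope = 1/G and intercept = (rn/G)^2.
slope, intercept = np.polyfit(mean_dn, var_dn, 1)

gain = 1.0 / slope                       # electrons per DN
read_noise = np.sqrt(intercept) * gain   # electrons

print(f"gain = {gain:.2f} e-/DN, read noise = {read_noise:.1f} e-")
```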
However, in its most straightforward application the test is very time consuming and hard to analyze: you have to take many exposures at each light level, and then determine a gain and readout noise for each pixel and look at them all. It is much easier just to use the set of all pixels as your ensemble at each light level. However, you can't do this directly, because each pixel may have a different sensitivity and different fixed-pattern noise, so you're not measuring a true ensemble. If there is significant variation of sensitivity, then you can't use a whole area at all, because the noise properties will vary across the area. You can avoid these problems by working with the difference between pairs of observations: if the light level is the same in the two images, then you'll be left with an image that only has noise.
Specifically, take a pair of images and form the difference. The expected noise of the difference image is $\sigma_{\rm diff} = \sqrt{2\,(G L + r_n^2)}$ in electrons (equivalently, $\sigma_{\rm diff}^2 = 2\,(L/G + r_n^2/G^2)$ in DN), i.e. twice the variance of a single image.
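A minimal sketch of extracting the mean level and single-frame noise from such a pair, assuming two frames at the same light level supplied as 2-D arrays in DN (the function and variable names are hypothetical):

```python
import numpy as np

def pair_stats(flat1, flat2):
    """Mean light level and single-frame variance (both in DN) from a pair.

    flat1, flat2 : two frames taken at the same light level.
    Differencing the frames cancels sensitivity differences and fixed-pattern
    structure; the variance of the difference is twice the single-frame
    variance, so it is divided by 2.
    """
    diff = flat1.astype(float) - flat2.astype(float)
    mean_level = 0.5 * (flat1.mean() + flat2.mean())
    single_frame_var = diff.var() / 2.0
    return mean_level, single_frame_var
```

Repeating this at a series of light levels supplies the points for the $\sigma^2$ vs $L$ plot described above.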
You can abbreviate this test if you just want to get an estimate of the gain and readout noise. First, take a pair of bias frames. These have a light level of zero, so the noise of the difference just gives you $\sqrt{2}\,r_n$ (or $\sqrt{2}\,r_n/G$ in DN). (Note that you still need to take a pair in case there is superbias structure.) Then take a pair at a high light level; at this level the readout noise is probably negligible, and you can determine the gain from $G = 2L/\sigma_{\rm diff}^2$, where $L$ is the mean light level and $\sigma_{\rm diff}^2$ is the variance of the difference image, both measured in DN.
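A minimal sketch of this abbreviated test, assuming a bias pair and a flat pair supplied as 2-D arrays in DN; the function name, the array names, and the subtraction of the mean bias level from the flats (so that $L$ is the signal above bias) are assumptions:

```python
import numpy as np

def quick_gain_readnoise(bias1, bias2, flat1, flat2):
    """Estimate gain (e-/DN) and read noise (e-) from a bias pair and a flat pair.

    bias1, bias2 : two zero-light frames (DN)
    flat1, flat2 : two frames at the same high light level (DN), bright
                   enough that the readout noise is negligible
    """
    # Bias pair: the difference removes any superbias structure, and its
    # standard deviation is sqrt(2) * rn/G in DN.
    bias_diff = bias1.astype(float) - bias2.astype(float)
    rn_dn = bias_diff.std() / np.sqrt(2.0)

    # Flat pair: the mean signal above bias, L, and the variance of the
    # difference image, 2 L / G, determine the gain.
    flat_diff = flat1.astype(float) - flat2.astype(float)
    signal = 0.5 * (flat1.mean() + flat2.mean()) - 0.5 * (bias1.mean() + bias2.mean())
    gain = 2.0 * signal / flat_diff.var()

    read_noise = rn_dn * gain   # convert read noise from DN to electrons
    return gain, read_noise
```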