We want to look for ourselves at how these data change with time by plotting the amount of light collected versus time. However, the data from the astronomical object are flux calibrated, i.e. they are in flux units (erg cm^-2 s^-1 A^-1) and were collected over a range of wavelengths. All we really want is the total amount of light seen in each exposure period; we prefer the units to be counts (number of photons) or count rate rather than flux units, mostly because the numbers are easier to work with. Flux units have a physical meaning and are "instrument-independent", which makes them useful for comparing observations from different instruments. Count rate is "instrument-dependent", i.e. different instruments will give a different count rate for the same object, as if the instruments speak different languages. That is why we calibrate instruments: to make them speak the same language. But since we are looking at data taken with the same instrumental set-up over time, we don't need to convert to flux.
It would take a long time to run the calibration software on all the data files to convert the data back to counts or count rate, so I want to know if I can get close enough to total count rate by multiplying the total flux sum by a constant. The constant will be the average sensitivity for the grating used over the wavelength range of the observations because the way to convert data to flux units is to divide by the sensitivity curve. I can get that constant either by calculating it from the calibration file for absolute sensitivity or by using a value out of the instrument handbook.
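The shortcut I have in mind can be written out in a few lines. This is only a sketch with made-up numbers; `flux` and `sensitivity` stand in for the flux-calibrated pixels and the absolute sensitivity curve:

```python
# Calibration divides count rate by sensitivity to get flux:
#   flux[i] = count_rate[i] / sensitivity[i]
# so the exact inversion is count_rate[i] = flux[i] * sensitivity[i].
# The shortcut replaces the per-pixel sensitivity with its average:
#   total_count_rate ~= sum(flux) * mean(sensitivity)

flux = [2.1e-13, 1.8e-13, 2.4e-13]      # flux-calibrated pixels (made up)
sensitivity = [7.0e12, 7.2e12, 7.4e12]  # sensitivity curve (made up)

exact = sum(f * s for f, s in zip(flux, sensitivity))
mean_sens = sum(sensitivity) / len(sensitivity)
approx = sum(flux) * mean_sens

print(exact, approx)
```

The two totals differ only to the extent that the flux and the sensitivity vary together across the wavelength range.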
I want to do a test to see if multiplying by this average sensitivity is good enough, so I am going to re-calibrate one of the files, turning off the step that converts to flux units. Then I will sum all the pixels in each group and compare the results of a few groups with the result of the total flux * average sensitivity. Actually, I will probably perform the equation on all the total flux points and plot both sets of data to see how they compare.
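The comparison I am planning would look roughly like this. Everything here is invented stand-in data (a random flux spectrum per group and a slowly varying sensitivity curve), just to show the shape of the test:

```python
import random

random.seed(2)

# Hypothetical stand-ins for the real data: one flux spectrum per group,
# and a sensitivity curve that varies slowly across the pixels.
n_groups, n_pix = 5, 64
sensitivity = [7.2e12 * (0.9 + 0.2 * i / (n_pix - 1)) for i in range(n_pix)]
mean_sens = sum(sensitivity) / n_pix

errors = []
for g in range(n_groups):
    flux = [random.uniform(1.5e-13, 2.5e-13) for _ in range(n_pix)]
    exact = sum(f * s for f, s in zip(flux, sensitivity))  # like a re-calibrated group sum
    approx = sum(flux) * mean_sens                         # total flux * constant
    errors.append(abs(exact - approx) / exact)
    print(f"group {g + 1}: exact={exact:.3f}  approx={approx:.3f}")

print(f"worst relative error: {max(errors):.2%}")
```

In practice I would plot both curves instead of printing them, but the per-group relative error is the number that tells me whether the constant is good enough.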
Hmmm, they don't even look close...I better examine my constant more closely. Ah, wait, I see. The observations were taken in the Small Science Aperture (SSA) and the constant I used from the handbook was for the Large Science Aperture (LSA).
Since the handbook does not list a sensitivity value for the SSA, I will have to calculate one. First I will find the range of wavelengths in the data by generating statistics on the wavelength file, whose extension is '.c0h'. 'gstat' is a task in a software package called STSDAS (which runs under yet another software package called IRAF); it performs statistics calculations, such as mean, median, sum, standard deviation, etc., on each group in an HST data file. The input to gstat is 'z*.c0h': I used a wildcard ('*') in the name of the data file because it is the only file in that directory with a '.c0h' extension. I use the 'fields' parameter to limit the output to the minimum and maximum wavelengths, and I use the 'group' parameter to limit the output to the 1st group because all the groups contain the same wavelength information.
cl> gstat z*.c0h group=1 fields="min,max"

# Image Statistics for z555010bt.c0h
# GROUP      MIN      MAX
      1  1162.06  1448.45
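In plain Python, a gstat-like min/max check amounts to very little code. This is a sketch; the wavelength values are made up, standing in for group 1 of the '.c0h' file:

```python
def gstat_fields(groups, fields=("min", "max")):
    """Tiny gstat-like helper: compute named statistics for each group."""
    funcs = {
        "min": min,
        "max": max,
        "mean": lambda g: sum(g) / len(g),
        "sum": sum,
    }
    return [{f: funcs[f](g) for f in fields} for f in [fields] for g in groups][0:len(groups)] if False else [
        {f: funcs[f](g) for f in fields} for g in groups
    ]

# Made-up wavelength grid; every group is identical, as in the real file,
# so statistics on group 1 alone are enough.
group = [1162.06 + i * 0.14 for i in range(2047)]
stats = gstat_fields([group], fields=("min", "max"))
print(stats[0])
```

Only one group is passed in, mirroring the `group=1` restriction in the gstat call.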
GHRS observations consist of many types of data images. Without going into all the details, let me just say there is raw science data and calibrated science data. Part of the calibration involves assigning wavelengths to the data points of the calibrated data. The wavelengths are kept in a different file than the science from the object being observed. Silly, I know, but that's how someone decided to do it. Additionally, there can be many groups of data in one file. For this observation, the groups can be roughly correlated with time so that if we sum up all the data in each group and then plot the results sequentially, it is like seeing how the data changed with time. There is always a corresponding group in the wavelength file for each group in the science data; as I said, all the wavelength groups are the same, in this case.
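The group-by-group bookkeeping described above amounts to something like the following sketch. The data are fabricated; in practice each group would be a slice of the calibrated science file and the x-axis would come from the exposure times:

```python
# Fabricated science data: one list of pixel values per group,
# where each group covers a consecutive slice of the exposure in time.
science_groups = [
    [10.0, 12.0, 11.0],
    [11.0, 13.0, 12.0],
    [ 9.0, 10.0, 10.5],
]

# Sum each group; plotted in order, this traces the signal over time.
light_curve = [sum(group) for group in science_groups]
print(light_curve)  # one total per group, in time order
```

Plotting `light_curve` against group number (or exposure time) gives the trend-with-time picture we are after.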
So, from the MIN and MAX columns in the above gstat results, I know the range of wavelengths for the data. Next I extract from the calibrated science file (*.c1h) the names of the absolute sensitivity file (abshfile) and the absolute flux wavelength file (nethfile) used to change the count rates to flux. Just like the science data, the wavelengths are kept in a separate file:
cl> hselect z*.c1h abshfile,nethfile yes

zref$e5v09370z.r3h  zref$e5v0936ez.r4h
'hselect' is another IRAF task whose job is to select information from header files. The input to the task is 'z*.c1h' and the information I want is the value of the keywords 'abshfile' and 'nethfile'. The 'yes' just tells 'hselect' to give me the keyword values for the input I specified, without any other qualifications on which files to work with.
Out of curiosity, I want to see what the range of wavelengths is for the nethfile, so I use gstat again. Since I don't specify the 'fields' parameter, I get the default and since I don't specify 'groups', I get the information for all the groups:
cl> gstat zref$e5v0936ez.r4h

# Image Statistics for zref$e5v0936ez.r4h
# GROUP  NPIX     MEAN    MIDPT   STDDEV      MIN      MAX      SUM
      1   177  1497.92  1497.92  256.198  1057.92  1937.92  265132.
      2   177  1497.92  1497.92  256.198  1057.92  1937.92  265132.
There are two groups for this file: one for each of the apertures. I double check the aperture for the science data and find out which group goes with which aperture for the absolute sensitivity file (abshfile):
cl> hedit z*.c1h aperture .

z555010bt.c1h,APERTURE = SSA

cl> hedit zref$e5v09370z.r3h,zref$e5v09370z.r3h aperture .

zref$e5v09370z.r3h,APERTURE = SSA
zref$e5v09370z.r3h,APERTURE = LSA
'hedit' is just like 'hselect', except that 'hselect' puts the output in table format whereas 'hedit' lists the information in one column.
Then I list out all the pixels in the nethfile. I need to know which pixel in the abshfile corresponds to the wavelengths of the data, so I redirect ('>') the output of the 'listpix' task into the file /tmp/pix.wave:
cl> listpix zref$e5v0936ez.r4h > /tmp/pix.wave
cl> less /tmp/pix.wave
By examining the above file with a paging program called 'less', I find that pixel 22 is close to the minimum wavelength of 1162A (A means Angstroms, or 10^-10 meters) and pixel 79 is close to the maximum wavelength of 1448A. Therefore, I want the mean value of the sensitivity in the abshfile between (and including) those pixels. I don't really need the statistics for both groups, so I only do the one relating to the SSA:
cl> gstat zref$e5v09370z.r3h[22:79] group=1 fields="mean,stddev"

# Image Statistics for zref$e5v09370z.r3h[22:79]
# GROUP        MEAN      STDDEV
      1   7.1690E12   2.0795E12
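The pixel hunt with 'listpix' and the final statistics can be mimicked in a few lines. This is a sketch: the wavelength and sensitivity arrays are invented stand-ins for the nethfile and abshfile, and `bisect` does the bracketing that I did by eye with 'less':

```python
import bisect
from statistics import mean, stdev

# Invented stand-ins for the nethfile wavelengths and abshfile sensitivities.
wave = [1057.92 + i * 5.0 for i in range(177)]  # monotonic wavelength grid
sens = [7.0e12 + 2.0e9 * i for i in range(177)]

lo = bisect.bisect_left(wave, 1162.0)   # first pixel at/above the data minimum
hi = bisect.bisect_right(wave, 1448.0)  # one past the last pixel at/below the maximum
window = sens[lo:hi]

print(lo, hi - 1, f"{mean(window):.4e}", f"{stdev(window):.4e}")
```

Note that IRAF pixel indices are one-based, so Python's `sens[21:79]` covers the same pixels as the `[22:79]` section in the gstat call.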
When I multiply my total flux values by the mean of the sensitivity (7.2e12), I get very close to the same answers as when I re-calibrate the data with the flux correction turned off and sum the count rates. So now I know I can multiply my sums of the total flux by the mean sensitivity, 7.2e12, to approximate the total count rate without having to re-calibrate all the data. Now it is easy to plot all of the data over time and see what the trends look like.
And the best part is that by recording this journal for you, I have kept a record for myself of what I did to these data :) Recording what you have done is a very important step in scientific work.