Observing with CGS4 and ORAC-DR


This document is a guide to the oracdr pipeline for CGS4 data. It will give you a preview of how your data will be reduced and displayed whilst you are at the telescope observing. It might also be useful for anyone running oracdr on CGS4 data offline.

General ORAC-DR documentation

The ORAC team have produced some general ORAC-DR documentation, which gives an overview of the software. The rest of this document gives information specific to using ORAC-DR with CGS4.

Running up the pipeline

Type runup at a shell prompt to get current instructions on starting the pipeline.

Other things to note: if for some reason you need to stop the pipeline at any point, simply issue a Ctrl-C in the xterm window where you started it. If the pipeline is part way through a recipe at the time, several Ctrl-Cs might be required, and there will be a short delay before the pipeline exits.

If you suspect that the pipeline did not exit cleanly, issue the oracdr_nuke command to tidy up any crash debris.

To re-start the pipeline, use: oracdr -loop flag -from NUMBER, where NUMBER is the frame that you want your re-started pipeline to start from. You should generally re-start the pipeline from the first observation in a group. If you're not sure where the group starts, the oracom window shows you the current group number, which is equal to the number of the first frame in the group. If you're a group or more ahead of yourself, then see the later section on viewing or re-creating a nightlog file.

Display Overview

You can change what the pipeline display system shows you and how it displays it by editing the disp.dat file, either directly or by using the oracdisp utility.

The default displays should be fine for most people. These are described here. The main reason you'd want to change anything from the defaults at the moment is that, when plotting a spectrum, the display system will autoscale, which you don't want if there's a large spike or other anomaly in the data. See "Changing the plotting scales" below for more details.

Generally, two windows are used to show you data as soon as possible after it comes off the array: one GAIA window, which shows you an image containing the data, and a KAPPA view window which shows histograms, extracted spectra and the like.

A similar pair of windows is used to show you more fully reduced data: intermediate results, and how well you're accumulating your signal to noise ratio etc. A GAIA window shows you the current group file image - ie the sky subtracted frame, or the difference of the main beam and offset pairs acquired so far. A KAPPA window is then used to show you the divided-by-standard and flux calibrated spectra.

Changing the plotting scales

The pipeline display system is controlled by a text file called disp.dat, which resides in your $ORAC_DATA_OUT directory. You can edit this file manually with your favourite text editor, or you can use the oracdisp utility.

The disp.dat file contains one line for each type of data that is to be displayed, specifying the file type that this refers to, the plotting tool to use, and various parameters of the plot - ie axis scales, autoscaling etc. One important thing to remember is that the display system always works in array co-ordinates, ie x is wavelength, y is along the slit, and z is the data value.

To turn off autoscaling of your spectral plots:

  • Either go to the xterm from which you launched oracdr, or get a new xterm and run oracdr_cgs4 in it.
  • Run oracdisp.
  • Select the line for the type of plot you wish to change. This will probably be DBS for the divided-by-standard spectra, or FC for the flux calibrated spectra. Double-click on the appropriate line in the lower pane of the oracdisp window. You may have to scroll down using the scrollbar to find the line you want.
  • In the upper portion of the oracdisp window, find the Z: line and click the button next to "Set" to go from autoscale to set mode, and change the zmin and zmax values to the axes range you would like displayed.
  • Click the "Modify" button at the bottom of the top pane of the oracdisp window.
  • Click the "Configure" button at the bottom of the oracdisp window.
The new settings will take effect the next time a spectrum of that type is displayed, which, if you're still observing the same group, will be when the next pair of observations is complete.

Starting Observing

Array Tests

The first data you should take are the ARRAY_TESTS data. When the first of these frames arrives, two of the aforementioned display windows will be opened to display them.

When the array tests have completed, oracdr will analyse them and print out a message telling you whether the array performance is as expected. The actual text should end with something like this:

Double correlated readnoise = 40.1 electrons is nominal
Median Dark current =  0.89 is nominal
Modal Dark current =  0.44 is nominal

If it declares any of the values to be HIGH, then you should consult your support scientist, or the CGS4 instrument scientist, for advice. LOW values are beneficial, not a problem!

Flat and Arc

After completing the array tests, you will probably want to take a flat field and an arc spectrum. These serve as a good example to explain what gets displayed.

First of all, a GAIA window opens up showing your flat field frame. If you move the mouse cursor over the image, you can read off the data value at the current mouse location in the box labeled Value towards the top right of the numeric display area. The window should look somewhat like this:

Next a KAPPA window opens. The top left quadrant shows a histogram of the data values in the flat field frame, the bottom left quadrant shows you a histogram of data values in the normalised flat field, and the bottom right shows you an image of the normalised flat field. It should look somewhat like this:

You should use the GAIA window and the histogram display to check that your flat field frame has sufficient counts: around 3000 in the brighter areas, and not more than around 4500, where the array response starts to go non-linear.
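The count-level check above can be sketched in a few lines. The 3000 and 4500 figures come from this page; the three-way low/ok/high classification is an illustrative assumption, not pipeline code:

```python
def classify_flat_counts(peak_counts, target=3000.0, nonlinear=4500.0):
    """Classify the peak counts seen in a flat field frame.

    target (~3000) and nonlinear (~4500) are the values quoted in the
    text; the 'low' band below target is an illustrative assumption.
    """
    if peak_counts > nonlinear:
        return "high"  # array response going non-linear: shorten the exposure
    if peak_counts >= target:
        return "ok"
    return "low"       # consider a longer exposure or a wider lamp aperture
```

If this reports "low" or "high", adjust the flat field component in the OT as described below.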

If the count rate is not suitable, then go back to the OT, and edit the flat field component - adjust the exposure time, or the black body lamp aperture if necessary.

Next, your Arc spectrum should arrive. Again, the frame will be displayed in the GAIA window and a histogram of data values will be plotted in the top left of the KAPPA window. Again, you should check for a suitable number of counts in the data and adjust the exposure time if necessary. You should also check for a suitable number of lines with which to wavelength calibrate the spectrum. You could try one of the other arc lamps if you are low on lines. Do this by editing your program in the OT.

The lower right panel in the KAPPA window shows you the arc spectrum with an estimated wavelength scale on it (derived from the grating parameters, ie angle, order etc, reported by the instrument). Use this to identify a few lines and check that your wavelength region has been selected with suitable accuracy.

What actually gets displayed

OK, time for a word on what's actually getting displayed.

First up, the GAIA window. Initially we simply display "raw" data here, but this isn't necessarily easy to interpret - there may be multiple integrations in your observation (see the note on exposures, integrations, observations and co-adds), and/or the data might be oversampled by array stepping. The pipeline replaces the display of raw data in the GAIA window with the data in a more easily interpreted form as soon as it has been reduced into one. In fact, this goes through several iterations. Usually, you end up with a _wce frame in the GAIA window, which is the most reduced form that single frames go to.

Next up, the KAPPA window. The top left quadrant of this always shows you a histogram of data values. The actual image that this comes from is the same one as is displayed in the GAIA window. Note that, especially with the echelle, the histogram can be changed significantly by the flat-fielding step, as echelle flat fields can have significant CVF gradients across them, and thus, when normalised, still contain a range of values. So if you want to check whether you're saturating, make sure you check a pre-flat-fielding frame when using the echelle.

The top right of the KAPPA window shows you the "bgl" file. This is a frame that shows you how background limited your observations were. The colour scale for this image goes from 0 to 2, where 1 means the Poisson noise from the number of detected photoelectrons equals the readnoise of the array. For maximum sensitivity to faint objects, your exposures should be long enough to make almost all the pixels background limited - ie this frame should be well filled with values greater than 1. Of course, when observing bright targets this is not possible, as to do so would saturate the array on the target.
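The criterion above can be made concrete. Assuming the bgl value is simply the ratio of Poisson noise to readnoise (a plausible reading of the 0-to-2 scale described here, not a confirmed detail of the pipeline):

```python
import math

def background_limited_ratio(n_electrons, readnoise_electrons):
    """Poisson noise over readnoise: a value of 1.0 means the Poisson
    noise from the detected photoelectrons exactly equals the array
    readnoise, ie the pixel is just background limited."""
    return math.sqrt(n_electrons) / readnoise_electrons

# With the ~40 e- readnoise quoted in the array tests section, a pixel
# needs about 40**2 = 1600 detected electrons to reach a ratio of 1.0.
```

This is why longer exposures push more pixels above 1 on the bgl display: the background signal grows linearly with time while the readnoise is fixed.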

When you're observing a Flat, the lower half of this window shows you the histogram of pixel values from the normalised flat field frame on the left, and an image of the normalised flat field frame on the right. You shouldn't be able to see large amounts of structure in this image other than the bad pixel mask, which will be apparent.

Other than when taking flats, the lower right panel of the KAPPA window shows an extraction of rows 139-141 of the latest image, with an estimated wavelength scale applied. Thus it is useful for checking wavelength regions with an arc observation. During normal observations of a target on the sky, this will show either a simple extracted spectrum of your target, with no sky subtraction, or a sky spectrum, depending on whether you're observing in the main or offset position at that time. This assumes that you are using the conventional peakup row for your targets (row 140).

The lower left of this KAPPA window shows you the y-profile of your sky subtracted group image, once you have observed sufficient data to form the group image. Generally, this is useful to see how well you're detecting your object and to check that you're getting equal flux in all beams if you're nodding (or chopping) along the slit. If you see here that your beams are unequal, you may need to stop and do a 2-row peakup.

Continuing Observations

Once you have completed a pair of main and offset beam observations, the pipeline forms a group file, into which it subtracts the main and offset images. This image is then displayed in a separate GAIA window, the spectrum extracted from both rows in it is displayed in the top half of a second KAPPA window, and the y-profile is displayed on the lower left of the first KAPPA window, as mentioned above. Examples:

If this object has been flagged as a standard star (in the OT), then the pipeline will consult the BS catalogue or SIMBAD to determine the spectral type (and hence temperature) and V magnitude of the star (which are displayed in the extracted spectrum title). It will use these to create a black body model of the star (nb. no interpolation over stellar features currently takes place), which will be used to flux calibrate subsequent data.
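The black body model mentioned above amounts to evaluating the Planck function at the star's temperature across your wavelength range. A minimal sketch (this is not the pipeline's actual code; the overall normalisation is left arbitrary, since flux calibration only needs the spectral shape plus the V magnitude):

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_um, temperature_k):
    """Black body spectral radiance B_lambda in SI units."""
    wl = wavelength_um * 1e-6  # microns -> metres
    return (2.0 * H * C**2 / wl**5) / math.expm1(H * C / (wl * K * temperature_k))

# eg the shape of a hot (~9700 K) standard across the K band, 2.0-2.5 um:
model = [planck(wl / 100.0, 9700.0) for wl in range(200, 251)]
```

Dividing your target spectrum by the standard and multiplying by this model removes the instrument response while restoring the standard's continuum slope.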

As soon as you have observed a non-standard-star target in the main and offset beam positions, a group file will be created and displayed, and spectra will be extracted and displayed as before. An additional KAPPA window is created to show you the extracted spectrum (_sp) and flux calibrated (_fc) spectra. These are all updated as more data is observed and signal to noise ratio is accumulated. Examples follow:


Creating and viewing the Nightlog

The nightlog is a software-generated text file summarising the observations over the night. This file is created when the pipeline sees the first data of the night, and a one-line entry for each frame is added as it is reduced by the pipeline. The file is in your reduced ($ORAC_DATA_OUT) directory, and has the name UTDATE.nightlog, where UTDATE is the current UT date (eg 20000915.nightlog). If you stop and re-start the pipeline, and any frames are processed twice, they get two entries in the nightlog file.

During the day, after you finish observing, a nightlog file is automatically created in your raw ($ORAC_DATA_IN) directory, containing the whole night's observing. If the file in your reduced area gets into a mess, you can create one in your raw area covering the observations so far. In a new xterm window, issue the oracdr_cgs4 command to set up the pipeline, then use something like: oracdr -log sf -skip -resume -from 1 NIGHT_LOG to create the nightlog file. A full version containing the whole night will be created here during the day anyway, overwriting any part-night one you leave there.

Nightlog files are best viewed by stretching an Xterm window to be at least 132 columns wide, then simply using more to scroll through the file.

Changing your mind about the recipe

Or - the DR keeps complaining about the flat and exiting

Occasionally you will find that, for some reason, the recipe that you specified whilst you were using the OT is no longer appropriate, but you've already started taking data with that recipe. This will usually be brought to your attention by the fact that oracdr will complain that it's unable to reduce the data and will exit. Normally this is due to the absence of a pre-requisite for the recipe that you have specified. Most of the recipes use a flat field, and will fail if there is not one available. The target recipes that attempt to divide by a standard star observation and flux calibrate the data will fail if there is no suitable standard star.

There seem to be a few common reasons why this occurs:

  • You're convinced that you have taken suitable calibration data, but oracdr refuses to use it for some reason.
  • Your object is about to set, so you intentionally deferred taking flat, arc and standard star observations until after your object frames.

Starting with the first case: oracdr will tell you why it is refusing to calibrate, though it can be a bit cryptic sometimes.

First a note on notation. We refer to flats, arcs and other similar frames as calibration frames.

The first thing to check is that oracdr knows about the calibration data which you consider to be suitable. The pipeline keeps an index of calibration frames, and looks in this index to find a suitable one when it needs one (eg to flat field some new data). It adds the details of calibration frames to this index as they pass through the pipeline; therefore, if the frame (say the flat field) hasn't been reduced by the pipeline - say you didn't have the pipeline running when you took the flat field data - then oracdr doesn't know about the frame and thus won't use it. Simply pour the calibration data through the pipeline with a command like oracdr -list NN, where NN is the frame number of, say, the flat field, then restart the pipeline on the frame where it fell over. If you did a sequence like Flat, Arc, star, target, then you might want to simply re-start the pipeline from the start of that sequence - ie oracdr -loop flag -from NN.

If the pipeline has processed the calibration data, but still refuses to use it, examine the output of oracdr to find out why. When oracdr requires a calibration frame, it works backwards through its index file for that type of frame (eg flat fields; the actual index is a human-readable text file, in this case index.flat in your reduced ($ORAC_DATA_OUT) directory), checking the headers of the indexed frames against a rule set to see if they are suitable. It will use the most recent frame that matches the rules. Typically, the rules file specifies that the optical configuration must be the same for the calibration and data frames - ie the grating, order, wavelength etc must match.

With CGS4 there is the added complication that if you move the motors to change from one optical configuration to another, then attempt to move back to the original, the optics will be at slightly different positions, simply because the motors cannot position that accurately. This effect is such that flats and arcs taken before the change and change-back are NOT suitable for calibrating the data. We therefore have the concept of a configuration index - the CNFINDEX FITS header. This is simply an integer which gets incremented each time an optical configuration motor is moved. We demand a CNFINDEX match between data and calibration frames to be sure of using appropriate calibration frames. Therefore, if you took flats and arcs and then moved the motors (perhaps something failed and you had to re-datum, or you accidentally ran the wrong sequence), you need to re-do your flats and arcs.
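The index-plus-rules lookup described above can be sketched as follows. The header names here are illustrative stand-ins (the real rules files define the exact set), but the most-recent-match search and the CNFINDEX check behave as described:

```python
def pick_calibration(index_entries, frame_headers,
                     rules=("GRATING", "ORDER", "WAVELENGTH", "CNFINDEX")):
    """Work backwards through the index (most recent entry last) and
    return the first entry whose rule headers all match the frame."""
    for entry in reversed(index_entries):
        if all(entry.get(key) == frame_headers.get(key) for key in rules):
            return entry
    return None  # no suitable calibration: oracdr would complain and exit

# A flat taken before the motors moved (CNFINDEX 7) is rejected for data
# taken afterwards (CNFINDEX 8), even though the nominal optics match.
index = [
    {"NAME": "flat_a", "GRATING": "40lpmm", "ORDER": 1, "WAVELENGTH": 2.2, "CNFINDEX": 7},
    {"NAME": "flat_b", "GRATING": "40lpmm", "ORDER": 1, "WAVELENGTH": 2.2, "CNFINDEX": 8},
]
frame = {"GRATING": "40lpmm", "ORDER": 1, "WAVELENGTH": 2.2, "CNFINDEX": 8}
```

The None case is exactly the "DR keeps complaining about the flat and exiting" situation this section is about.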

OK, now onto the second case, where you deliberately don't have the calibration data because you decided to defer those observations until after you'd observed your target (most likely because it's setting on you, or there's some other time-critical constraint). Now, of course, it's impossible for the DR to do the normal reduction sequence, but you still want it to display your data as it comes in, as best it can, so that you can see how you're doing. The best way to do this is to specify an alternate recipe with which to reduce the data. Note: this doesn't change the recipe specified in the headers of the data files, it simply forces oracdr to use a specified recipe for the moment. The recipes you are most likely to want in this situation are the ones ending in _NOSTD (if it's trying to divide by a standard and flux calibrate, yet you don't have the standard star observations yet) or the ones ending in _NOFLAT if you don't have a flat field yet.

Later on, when you come to re-process the data after you've taken the necessary calibration data, all you have to do is reduce the calibration frames before the target frames, by means of the -list option to oracdr, then when you process the target data, it will use the originally specified recipe, with the correct (albeit taken later) calibration frames.

Data Reduction Recipes for CGS4

In general, if you based your observations on the template library, the DR-RECIPE component will be all set up for you, and you won't need to change anything.

The ORAC-DR section of the CGS4 manual provides more information on what the individual recipes do and the observing sequences for which they are appropriate. If you are unsure, or if you think you will need a recipe that is not provided in our standard selection, you should contact your support scientist or the CGS4 instrument scientist well in advance of your observing run.

CGS4 data files under ORAC

Raw files in $ORAC_DATA_IN

The CGS4 raw data files are now stored as Starlink HDS containers. Each file is equivalent to one observation, and as such contains a header component and one or more (actually NINT) integrations. Each integration is stored as an NDF component of the HDS file.

The filenames are thus: cYYYYMMDD_NNNNN.sdf, where YYYYMMDD is the numeric UT date (year, month and day) and NNNNN is the observation number, padded with leading zeros when necessary.

Reduced single frame files in $ORAC_DATA_OUT

First, some conventions:

The filename structure is: (PREFIX)(UTDATE)_(FRAME NUMBER)[_(EXTENSION)].sdf, where (THIS) is always present and [THIS] is optional.

(PREFIX) is the letter 'c' if the file contains data from a single observation. It is 'gc' if the file contains data from a number of observations - ie a group.

_(EXTENSION) is used by individual primitives (think of a primitive as a single step within a recipe) for their output files. The pipeline itself keeps track of passing these files between primitives, though all the useful ones are left on disk at the end so you can look at intermediate data products if you wish.

For example, c20000410_00123_ff.sdf would be data from a single observation, number 123, that has been flat fielded (the table later tells you the _ff means flat fielded).
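The naming scheme above is easy to generate and parse mechanically. A small sketch (the regular expression is my own, built from the convention as stated on this page):

```python
import re

def raw_filename(utdate, obsnum):
    """Raw CGS4 file name: cYYYYMMDD_NNNNN.sdf, observation number zero-padded."""
    return f"c{utdate}_{obsnum:05d}.sdf"

# prefix 'c' (single observation) or 'gc' (group), 8-digit UT date,
# 5-digit frame number, optional _extension such as _ff or _wce
NAME_RE = re.compile(r"^(?P<prefix>g?c)(?P<utdate>\d{8})_(?P<num>\d{5})"
                     r"(?:_(?P<ext>[a-z]+))?\.sdf$")

def parse_filename(name):
    """Split a raw or reduced file name into prefix, UT date, number, extension."""
    match = NAME_RE.match(name)
    return match.groupdict() if match else None
```

For example, parse_filename("c20000410_00123_ff.sdf") identifies a single-observation file, frame 123, with the _ff (flat fielded) extension.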

So is that an HDS or an NDF?

Well, often it depends; the pipeline passes files between primitives as either HDS or NDF, depending on which is most appropriate. For example, with 2x2 sampled data, the raw file will be an HDS container containing the 4 NDF data arrays from the 4 sample positions. If we're using a 1x1 sampled flat field, the flat field primitive will flat field each component, and write out an HDS container containing the four flat fielded data arrays as NDFs. On the other hand, if the flat field is taken with the same sampling as the data, the pipeline will interleave the samples of both the data and the flat field before carrying out the flat fielding, thus the _ff file would be a single NDF.
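The interleaving step can be illustrated with a toy version. The sample ordering below is an assumption for illustration (the real pipeline takes the step offsets from the data):

```python
def interleave_2x2(samples):
    """Interleave four detector frames taken at 2x2 half-pixel steps into
    one image twice the size in each axis.

    samples: four equally sized 2-D lists, assumed (for illustration) to
    be ordered by step offset (0,0), (1,0), (0,1), (1,1) in output pixels.
    """
    ny, nx = len(samples[0]), len(samples[0][0])
    out = [[0.0] * (2 * nx) for _ in range(2 * ny)]
    offsets = [(0, 0), (1, 0), (0, 1), (1, 1)]
    for frame, (dx, dy) in zip(samples, offsets):
        for y in range(ny):
            for x in range(nx):
                out[2 * y + dy][2 * x + dx] = frame[y][x]
    return out
```

Each raw sample lands on its own sub-pixel grid, which is why the interleaved _inc file has twice the pixels per axis of the individual frames.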

Of course, some primitives only write out single-NDF files; for example, the primitive that interleaves and/or coadds the samples into one larger image writes out _inc files, and these are always single component NDFs.

An additional factor is that later in the processing, the pipeline may go back to using HDS containers. This happens, for example, when we start extracting beams and spectra from the reduced group image: HDS containers holding information for each beam (for example opt-extract profiles, and the spectra themselves) are used whilst the spectra pass through the primitives that operate on the individual extracted beams (for example de-rippling), before these are combined to give the final spectrum.

File Extensions for single observation files
Extension  HDS or NDF  Description
_mraw      either      A Modifiable copy of the raw data
_bp        either      Bad Pixel mask has been applied
_rnv       either      Read Noise Variance added
_sbf       either      Subtracted Bias Frame
_acb       either      Added Chop Beams (used for Flats)
_scb       either      Subtracted Chop Beams
_pov       either      Includes Poisson Variance
_bgl       either      How BackGround Limited the integration is
_ipm       either      Interleave Prepared and Masked
_inc       NDF         Interleaved and Coadded
_ff        either      Data has had the flat field applied
_nf        either      Data is a normalised flat field
_wce       NDF         Wavelength Calibrated by Estimation; the equivalent of the old ro* file
_ss        NDF         Sky Subtracted

This figure illustrates the data flow between primitives for the reduction of a typical single frame.

Reduced group files in $ORAC_DATA_OUT

The pipeline adds the individual frames into the group as they are processed. The group number is usually the frame number of the first frame in the group. Group files are always single NDFs.

File Extensions for group files
Extension     HDS or NDF  Description
No extension  NDF         The difference of all the main and offset beam images; the equivalents of the old rg* files
_oep          HDS         The opt-extract profiles
_oer          HDS         The opt-extract profiling residuals
_oes          HDS         The opt-extracted spectra
_rif          HDS         The de-rippling flat fields
_dri          HDS         The de-rippled spectra
_ccs          HDS         Cross-Correlated and Shifted; all the beams are spectrally aligned to beam 1
_ccf          HDS         The Cross-Correlation Functions from forming the _ccs frames
_sp           NDF         Extracted Spectrum; the coaddition of all the beams
_aws          NDF         Aligned With Standard; spectrum is spectrally aligned with the standard star
_scf          NDF         Standard cross-correlation function; the CCF from forming the _aws frames
_dbs          NDF         Data has been Divided By a Standard star (including the standard star black body model)
_fc           NDF         Flux Calibrated

Contact: Tom Kerr. Updated: Wed Oct 27 14:30:35 HST 2004
