calwebb_detector1
The calwebb_detector1 module is stage 1 of the JWST Science Calibration Pipeline and processes data taken with all instruments and modes. The input to this stage is the raw non-destructively read ramps, and the output is uncalibrated slope (countrate) images. A number of detector-level corrections are performed in this stage, and slopes are fit to the corrected ramps. These steps are listed in Figure 1 (data flows from top to bottom). Since the near-infrared and mid-infrared detectors use different architectures, some steps are unique to each.
Words in bold are GUI menus/panels or data software packages; bold italics are buttons in GUI tools or package parameters.
A brief description of each of the steps within calwebb_detector1 can be found below, along with links to further details (e.g., the relevant reference files) that can be found on the corresponding ReadTheDocs pages. Note, however, that the reference files themselves are all provided via CRDS. For instrument mode-specific notes on these pipeline steps, see the corresponding known issues with JWST data articles.
Figure 1. calwebb_detector1
Steps for both NIR and MIR data
Data quality initialization
ReadTheDocs documentation: Data Quality (DQ) Initialization, Error Propagation
Package name: dq_init
Data quality flags on the pixels are initialized. The pixel data quality flags are initialized based on detector-specific calibration reference files and designate permanent conditions that affect all groups for that pixel. Examples of such flags are dead pixels, hot pixels (showing excess dark signal), and low quantum efficiency pixels. The group data quality flags are initialized to zero, and these flags would be set if conditions occur that only affect some groups for a pixel. Both the pixel and group flags are updated during subsequent calibration pipeline steps as needed.
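The bitwise bookkeeping described above can be sketched as follows. The flag names and bit values here are illustrative stand-ins, not the actual DQ definitions from the jwst package's datamodels:

```python
import numpy as np

# Illustrative bit values only -- the jwst package defines its own
# DQ flag mnemonics and values; these are stand-ins for the sketch.
DEAD = 1
HOT = 2
LOW_QE = 4

def init_pixeldq(mask_dead, mask_hot, mask_low_qe):
    """Build a pixel DQ array by OR-ing condition bits, in the spirit of
    dq_init reading flags from the mask reference file."""
    pixeldq = np.zeros(mask_dead.shape, dtype=np.uint32)
    pixeldq[mask_dead] |= DEAD
    pixeldq[mask_hot] |= HOT
    pixeldq[mask_low_qe] |= LOW_QE
    return pixeldq

dead = np.array([[True, False], [False, False]])
hot = np.array([[True, False], [False, True]])
low = np.zeros((2, 2), dtype=bool)
dq = init_pixeldq(dead, hot, low)
# dq[0, 0] has both the DEAD and HOT bits set, so its value is 3
```

Because each condition occupies its own bit, downstream steps can test for or add individual conditions without disturbing the others.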
This step also initializes the uncertainties for each pixel (error arrays). Their values are modified during subsequent steps, and are propagated in the calibration pipeline using a noise model. The uncertainty from each step that contributes noise to the final measurement is separately propagated through the science calibration pipeline. Different uncertainty sources behave in different ways. Some noise sources (e.g., photon noise) are independent between integrations and others (e.g., flat field noise) are not. In addition, the spatial covariance of different sources varies. By propagating each term through the calibration pipeline, the use of each term can be customized for the processing. For example, the use of the flat field noise term is different between non-dithered and dithered observations. For the former, the noise does not reduce with the addition of more integrations while for the latter it does.
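The practical consequence of tracking noise terms separately can be shown with a toy variance budget. The numbers below are hypothetical, and the simple 1/N scaling is an idealization of how the pipeline combines integrations:

```python
# Hypothetical per-integration variance terms (DN^2) for one pixel.
var_poisson = 100.0   # photon noise: independent between integrations
var_flat = 25.0       # flat-field noise: fully correlated for
                      # non-dithered repeats of the same pixel

n_int = 4

# Averaging N integrations: independent terms scale as 1/N, while a
# fully correlated term does not reduce at all.
var_nondithered = var_poisson / n_int + var_flat

# With dithers, each integration samples a different detector pixel, so
# the flat-field error also decorrelates and averages down.
var_dithered = (var_poisson + var_flat) / n_int

print(var_nondithered)  # 50.0
print(var_dithered)     # 31.25
```

This is why the pipeline keeps the terms separate until the end: the correct combination rule depends on the observing strategy.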
Saturation check
ReadTheDocs documentation: Saturation Detection
Package name: saturation
The analog-to-digital unit (ADU, equivalently DN) values for each group are checked for saturation. The saturation level is set by detector-specific calibration reference files, and the group data quality flag is set if the DN value in a group is above that level.
The baseline version of this algorithm does not account for the case where not all of the frames in a group are saturated (some are and some are not). An optimal version of this algorithm that does account for this case is currently under investigation.
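The core of the check is a per-pixel threshold comparison, sketched below. The flag value is an illustrative stand-in for the real group-DQ bit:

```python
import numpy as np

SATURATED = 2  # illustrative group-DQ bit, not the jwst-defined value

def flag_saturation(ramp, sat_level):
    """Flag every group at or above the saturation level.

    ramp      : (ngroups, ny, nx) DN values
    sat_level : (ny, nx) per-pixel thresholds, as would come from the
                saturation reference file
    """
    groupdq = np.zeros(ramp.shape, dtype=np.uint8)
    groupdq[ramp >= sat_level] |= SATURATED
    return groupdq

ramp = np.array([[[10.0]], [[40.0]], [[65.0]]])   # one pixel, 3 groups
sat = np.array([[60.0]])
gdq = flag_saturation(ramp, sat)
# only the last group exceeds the 60 DN threshold and gets flagged
```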
Reference pixel correction
ReadTheDocs documentation: Reference Pixel Correction
Package name: refpix
All the detectors have reference pixels that have the same readout electronics as regular pixels, but are not sensitive to light. These pixels are located at the edge of the detector arrays and are read out by the same amplifiers as the regular pixels. Thus, reference pixels track drifts in the readout electronics. The average reference pixel level for each readout amplifier is subtracted from the regular pixels for that same amplifier. While all instruments use reference pixel correction, only some use specific calibration reference files.
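A heavily simplified sketch of the per-amplifier subtraction is below. It assumes the frame splits into equal column stripes, one per amplifier, with reference rows at the top and bottom edges; the real refpix step handles odd/even columns, side reference pixels, and other details:

```python
import numpy as np

def refpix_correct(frame, n_amps=4, n_ref=4):
    """Subtract the mean reference-pixel level per amplifier output.

    Simplified: assumes n_amps equal column stripes, each read by one
    amplifier, with n_ref reference rows at the top and bottom edges.
    """
    corrected = frame.astype(float).copy()
    width = frame.shape[1] // n_amps
    for a in range(n_amps):
        cols = slice(a * width, (a + 1) * width)
        # pool the top and bottom reference rows for this amplifier
        ref = np.concatenate([frame[:n_ref, cols].ravel(),
                              frame[-n_ref:, cols].ravel()])
        corrected[:, cols] -= ref.mean()
    return corrected
```

Because the reference pixels see the same electronics drifts as the light-sensitive pixels on the same amplifier, subtracting their mean removes the common-mode drift.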
Jump detection
ReadTheDocs documentation: Jump Detection
Package name: jump
The jump detection step in the calibration pipeline flags jumps in a ramp where the DN difference between two consecutive groups is large relative to the differences between other consecutive pairs of groups. These ramp jumps are often caused by cosmic rays (CRs), which deposit large amounts of charge in a pixel. The detection threshold, expressed as a number of sigmas above the noise (the rejection threshold), is a step parameter. Observers running the calibration pipeline manually may need to increase or decrease this threshold if they notice over- or under-flagging of jumps in their data, or may increase it to speed up the step calculations. This is done by updating the rejection_threshold parameter for the step.
For the baseline algorithm, the 2-point difference method is used as it is computationally fast and sufficient for measurements in the photon dominated regime (see Anderson & Gordon, 2011). An optimal version of this algorithm that detects smaller ramp jumps in the read noise regime is under investigation.
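The essence of the 2-point difference method can be sketched for a single pixel. This is a one-pass simplification (the pipeline's jump step iterates and handles many edge cases), and the noise model here assumes read-noise-dominated differences:

```python
import numpy as np

def flag_jumps(ramp, read_noise, rejection_threshold=4.0):
    """Two-point-difference jump detection for a single pixel's ramp.

    Compares each consecutive-group difference to the median difference,
    in units of the noise of one difference (sqrt(2) * read noise,
    assuming read-noise domination for this sketch).
    """
    diffs = np.diff(ramp)
    median_diff = np.median(diffs)
    sigma = np.sqrt(2.0) * read_noise
    ratio = (diffs - median_diff) / sigma
    # a flagged difference marks a jump into the following group
    jump_groups = np.where(ratio > rejection_threshold)[0] + 1
    return jump_groups

ramp = np.array([10.0, 20.0, 30.0, 140.0, 150.0, 160.0])
print(flag_jumps(ramp, read_noise=5.0))  # [3] -- group 3 follows the jump
```

Raising rejection_threshold makes the cut less sensitive, which reduces false positives at the cost of missing smaller jumps.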
The jump detection step has been enhanced (as of pipeline build 9.0) to also detect and flag "snowballs and showers" that result from unusually energetic cosmic ray events. These are large regions of low-level jumps surrounding a heavily saturated central core, and can affect hundreds of pixels on the detector.
It should be noted that if a moving object is present in observations that are specified as fixed target, this step can erase the signal from such an object (and vice versa: for observations specified as moving target, this step can erase the signal from fixed objects such as stars or galaxies, which trail across the detector during the integrations). This is discussed further in Key Differences for JWST Moving Target Observations.
Slope fitting
ReadTheDocs documentation: Ramp Fitting
Package name: ramp_fit
The slope for each ramp is determined by performing a weighted linear least squares fit. If a ramp has flagged ramp jumps, the fit is done for each jump-free segment and the resulting slopes averaged to produce a single slope per ramp. The weighting is done using the Fixsen et al. (2000) "optimal weighting" method.
An enhanced version of this algorithm that uses generalized least squares and a covariance matrix is under investigation.
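The segment-and-average logic can be sketched as below. For simplicity this uses a plain unweighted fit per segment and an unweighted mean of segment slopes; the pipeline uses the Fixsen et al. (2000) optimal read weighting within segments and inverse-variance weighting between them:

```python
import numpy as np

def fit_ramp_segments(group_times, ramp, jump_groups):
    """Fit a slope to each jump-free segment and average the results.

    group_times : times of each group
    ramp        : DN values per group for one pixel
    jump_groups : indices of groups flagged as following a jump
    """
    breaks = [0] + sorted(jump_groups) + [len(ramp)]
    slopes = []
    for start, stop in zip(breaks[:-1], breaks[1:]):
        if stop - start >= 2:  # need at least 2 groups to fit a slope
            slope = np.polyfit(group_times[start:stop],
                               ramp[start:stop], 1)[0]
            slopes.append(slope)
    return float(np.mean(slopes))

t = np.arange(6.0)
ramp = np.array([0.0, 2.0, 4.0, 106.0, 108.0, 110.0])  # jump before group 3
print(fit_ramp_segments(t, ramp, jump_groups=[3]))  # 2.0
```

Note how the ~100 DN jump does not bias the slope: each jump-free segment yields the same 2 DN per group rate.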
NIR-specific steps
Group scale correction
ReadTheDocs documentation: Group Scale Correction
Package name: group_scale
In rare cases, data can be taken where the number of frames in a group is not a power of 2. Because the onboard software averages frames by simple bit-shifting, it does not divide by the appropriate number of frames in this case. Both the actual number of frames in a group and the number assumed by the onboard software are recorded, so this step multiplies all the raw DN values by the appropriate ratio to recover the correct DN values for the number of frames actually averaged per group.
Note that while this step formally runs on MIRI data, no corrections need to be applied due to differences in the on-board processing and method of populating the relevant header keywords.
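The rescaling amounts to one multiplication per pixel. The keyword names below (NFRAMES, FRMDIVSR) follow the JWST header conventions, and the values are illustrative:

```python
# The onboard software sums NFRAMES frames and divides by FRMDIVSR via a
# bit shift; FRMDIVSR is always a power of 2, so when NFRAMES is not, the
# group average comes down too small and must be rescaled.
nframes = 5    # frames actually averaged in each group (NFRAMES)
frmdivsr = 8   # divisor the onboard software applied (FRMDIVSR)

raw_dn = 1000.0
corrected_dn = raw_dn * frmdivsr / nframes
print(corrected_dn)  # 1600.0
```

When NFRAMES is itself a power of 2, the ratio is 1 and the step leaves the data unchanged.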
Superbias subtraction
ReadTheDocs documentation: Superbias Subtraction
Package name: superbias
The overall DN level in each pixel and group is offset by a bias level. The nonlinearity in a ramp is relative to this bias level. Thus, the bias level in each group for each pixel is subtracted based on detector-specific calibration reference files to provide the correct DN level for the nonlinearity correction step.
Linearity correction
ReadTheDocs documentation: Linearity Correction
Package name: linearity
The NIR detectors show a nonlinearity that is due to a changing gain. This nonlinearity is well fit by a low-order polynomial versus relative DN, where "relative" means referenced to the bias level. The correction is done using the fitted polynomials, whose parameters are provided by detector-specific calibration reference files.
For the baseline algorithm, grouped data is treated the same as for non-grouped data, and the error due to this assumption is small. An enhanced version of this algorithm that accounts for the effects of grouping on the nonlinearity correction is under investigation.
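The correction itself is a per-pixel polynomial evaluation. The coefficient ordering and values below are hypothetical; the real ordering and coefficients come from the linearity reference file:

```python
import numpy as np

def linearity_correct(dn_relative, coeffs):
    """Apply a low-order polynomial linearity correction.

    dn_relative : measured DN referenced to the bias level
                  (i.e., after superbias subtraction)
    coeffs      : polynomial coefficients for this pixel, lowest order
                  first (illustrative ordering assumed here)
    """
    return np.polynomial.polynomial.polyval(dn_relative, coeffs)

# Hypothetical coefficients: identity plus a small quadratic term that
# boosts high signals, compensating for the gain roll-off.
coeffs = np.array([0.0, 1.0, 1.0e-6])
print(linearity_correct(10000.0, coeffs))  # 10100.0
```

This ordering of steps is why the superbias must be removed first: the polynomial is only valid when its input is referenced to the bias level.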
Persistence correction
ReadTheDocs documentation: Persistence Correction
Package name: persistence
The detectors suffer from persistence giving rise to faint "after images" of previous exposures that are seen in the current exposure. The persistence decays exponentially. Persistence is corrected using a trap based model of the persistence with the model details given by detector-specific calibration reference files. In addition to correcting the persistence, a pixel flag is set indicating that persistence was corrected at a detectable level.
Dark subtraction
ReadTheDocs documentation: Dark Current Subtraction
Package name: dark_current
The NIR detectors show excess signal in dark exposures. This excess signal is subtracted group-by-group using detector-specific calibration reference files.
Charge migration
ReadTheDocs documentation: Charge Migration
Package name: charge_migration
This step corrects for charge migration caused by the brighter-fatter effect (BFE) in undersampled NIRISS data.
1/f correction
ReadTheDocs documentation: 1/f correction
Package name: clean_flicker_noise
This step implements a generalized correction for 1/f noise (i.e., striping) in near-IR data.
Gain scale correction
ReadTheDocs documentation: Gain Scale Correction
Package name: gain_scale
Some subarray observations are taken with a different gain than full-array observations. In such cases, the measured DN values must be corrected to the same gain reference as full-array observations so that all downstream processing works seamlessly. This is done by scaling the slopes and slope uncertainties by the ratio of the non-standard and standard gains.
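The scaling is a single multiplicative factor applied to both the slope and its uncertainty. The gain values and the sense of the ratio below are illustrative only:

```python
# Scale slopes measured with a non-standard subarray gain onto the
# standard full-array gain reference. Values and the direction of the
# ratio are illustrative; the pipeline takes both gains from reference
# metadata.
gain_subarray = 1.0   # e-/DN used for the subarray readout (assumed)
gain_standard = 2.0   # e-/DN of standard full-array observations (assumed)

slope_dn = 50.0       # measured slope, DN/s
slope_err = 5.0       # its uncertainty, DN/s

scale = gain_subarray / gain_standard
slope_corrected = slope_dn * scale
err_corrected = slope_err * scale
print(slope_corrected, err_corrected)  # 25.0 2.5
```

Because the same factor multiplies the slope and its uncertainty, the signal-to-noise ratio is unchanged; only the DN reference frame is.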
MIR-specific steps
The MIRI detector1 pipeline performance has been studied extensively; see Morrison et al. (2023) for details.
Reset anomaly correction
ReadTheDocs documentation: Reset Anomaly Correction
Package name: reset
The MIR detectors show a transient phenomenon at the beginnings of ramps that is due to the reset. This transient is additive, and is caused by the non-ideal behavior of the field effect transistor (FET) upon resetting in the dark, causing the initial frames in an integration to be offset from their expected values. The first 12 groups in MIR ramps should have the reset anomaly subtracted. This correction is derived from dark observations and behaves similarly to the dark subtraction step. This correction is integration dependent.
For the baseline algorithm, the reset anomaly is separated from the dark subtraction step to simplify other MIR correction steps in the calibration pipeline.
First frame correction
ReadTheDocs documentation: First Frame Correction
Package name: firstframe
The 1st frame of MIR data has a transient that has not been fully characterized.
For the baseline algorithm, this 1st frame is flagged and not used further in the calibration pipeline. An enhanced version of this algorithm that corrects for this transient is under investigation.
Last frame correction
ReadTheDocs documentation: Last Frame Correction
Package name: lastframe
The MIR detectors show a transient in the last frame that is caused by the reset pattern. The last frame of the MIR detectors is a read-reset frame in which 2 rows are read and then reset, after which the next 2 rows are processed in the same manner. The transient arises because the reset strongly changes the values of the pixels still to be read, due to coupling between pixels.
For the baseline algorithm, the last frame is flagged and not used further in the calibration pipeline.
An enhanced version of this algorithm is under development. The amplitude of the transient seems to be well characterized by a polynomial fit to the value in the last frame itself. The correction under development will use polynomial parameters provided in detector-specific calibration reference files.
Linearity correction
ReadTheDocs documentation: Linearity Correction
Package name: linearity
The MIR detectors show a non-linearity that is due to changing quantum efficiency. This non-linearity is well fit with a low order polynomial fit versus absolute DN. The non-linearity is wavelength-dependent at wavelengths above approximately 21 μm. The correction is done using the fitted polynomials whose parameters are provided by detector and wavelength-specific calibration reference files.
RSCD correction
ReadTheDocs documentation: Reset Switch Charge Decay (RSCD) Correction
Package name: rscd
The "reset switch charge decay" is a transient seen in the 2nd and subsequent integrations in a MIR exposure. If uncorrected, this transient results in the slopes in the 2nd and higher integrations being larger than the 1st integration. This transient is well described by a decaying single exponential that is proportional to the counts at the end of the previous integration. The correction involves subtracting this exponential from the 2nd and higher integrations in multi-integration ramps where the parameters of the exponential are set by detector-specific calibration reference files.
Dark subtraction
ReadTheDocs documentation: Dark Current Subtraction
Package name: dark_current
The MIR darks show a dependence on the integration number in an exposure. The dark subtraction is done group-by-group using detector and integration specific calibration reference files.
References
Anderson & Gordon 2011, PASP, 123, 1237
Optimal Cosmic-Ray Detection for Nondestructive Read Ramps
Fixsen et al. 2000, PASP, 112, 1350
Cosmic-Ray Rejection and Readout Efficiency for Large-Area Arrays
Morrison, J. et al. 2023, PASP, 135, 5004
JWST MIRI Flight Performance: Detector Effects and Data Reduction Algorithms