Exercise Results for Friday, June 1, 2001
L. Strow
Updated: Jun 7, 2001
Document URL: http://asl.umbc.edu/pub/airs/exercises/friday.html
We collected all sonde matchups that met Mitch's clear flag. From these we subsetted the matchups that were over ocean, at night, and within a 1-hr time match. We then compared B(T) at 900 cm-1 to B(T) at 2616 cm-1, giving the scatterplot shown yesterday. These matchups were put into an RTP file, the NCEP model was used to extrapolate to levels where there was no sonde information, and the level RTP file was then run through klayers to generate a layers RTP file.
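The 900 vs. 2616 cm-1 comparison above is done in brightness temperature. As a minimal sketch (the constants are standard radiation constants in wavenumber units; the function names are mine, not from the processing code), the conversion between radiance and B(T) is just the Planck function and its inverse:

```python
import numpy as np

# Planck-law constants in wavenumber units (assumed standard values):
#   C1 = 2*h*c^2  [mW / (m^2 sr cm^-4)]
#   C2 = h*c/k    [K cm]
C1 = 1.191042e-5
C2 = 1.4387752

def radiance_to_bt(wnum, rad):
    """Brightness temperature [K] from radiance [mW/(m^2 sr cm^-1)]
    at wavenumber wnum [cm^-1] (inverse Planck function)."""
    return C2 * wnum / np.log(1.0 + C1 * wnum**3 / rad)

def bt_to_radiance(wnum, bt):
    """Forward Planck function; inverse of radiance_to_bt."""
    return C1 * wnum**3 / np.expm1(C2 * wnum / bt)
```

A round trip at either test channel recovers the input temperature to numerical precision, which is a quick sanity check on the constants.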
These layer RTP files will be placed on weather in /home/strow/Data/may29 after the teleconference; names TBD (will email SCITEL). We then ran SARTA and inserted the computed radiances into the robs1 field in the RTP file.
Today we declared as clear only those matchups with abs(BT900 - BT2616) < 1K. This left us with 16 matchups. Examination of these showed that 4 of them had very different effective surface temperatures, with obs-calcs of 10-20K. We removed these from the matchup list, which is fair because the NCEP model or AVHRR SST data can be used to remove these outliers.
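The two-step selection described above (the 1K window-channel clear test, then dropping the matchups with 10-20K surface obs-calcs) can be sketched as follows. The array names and numbers here are illustrative synthetic values, not the real matchup data:

```python
import numpy as np

# Illustrative per-matchup data (synthetic, not the real values):
# observed BTs in the two test channels, and obs-minus-calc at the surface.
bt900   = np.array([285.0, 290.2, 272.1, 288.4, 295.0])
bt2616  = np.array([284.6, 290.9, 269.0, 288.1, 295.4])
sst_omc = np.array([0.3, -0.5, 12.0, 0.2, -15.0])   # obs - calc, K

# Step 1: declare clear where the two window channels agree to within 1K.
clear = np.abs(bt900 - bt2616) < 1.0

# Step 2: drop matchups whose effective surface temperature is far off
# (obs-calcs of 10-20K); in practice the NCEP model or AVHRR SST data
# would supply this test.
good = clear & (np.abs(sst_omc) < 10.0)

idx = np.nonzero(good)[0]   # indices of the surviving matchups
```

With these synthetic numbers, matchup 2 fails the clear test and matchup 4 fails the surface test, leaving indices 0, 1, and 3.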
The remaining 12 matchups looked pretty good, so I plotted their location. See the following plot.
Note that we really only had 2 sonde matchups, and one of them is over Alaska (7 FOVs). So we have ice, not ocean; this happened because I used only the landFrac flag. I assume the NCEP ice flag will be set here, so discriminating against this sonde should also be easy in the future.
Here is our final map of the sonde locations; as already seen in the previous plot, we really only have 1 sonde match, with 4 matching FOVs. At least this plot shows the scan angles for this single matchup.
The following graphs all show the same data, but with a variety of zoom-ins. The mean obs and calc B(T)'s are shown in the top panel of each graph. The bottom panel shows (1) the mean bias, (2) its variance, and (3) the mean bias if the NCEP model is used for the whole calculation rather than the sonde data. Note, of course, that the variance is pretty meaningless, since it is just the variability among the 4 FOVs we matched.
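The bias and variance quantities in the bottom panels amount to per-channel statistics of obs minus calc across the matched FOVs. A minimal sketch, with hypothetical array names and synthetic numbers (4 FOVs by 2 channels here, standing in for the full channel set):

```python
import numpy as np

# Illustrative shapes: 4 matched FOVs x nchan channels (synthetic values).
obs_bt  = np.array([[240.2, 285.1], [240.0, 284.8],
                    [239.8, 285.3], [240.4, 285.0]])
calc_bt = np.array([[239.0, 285.6], [238.9, 285.4],
                    [239.1, 285.8], [239.2, 285.5]])

diff     = obs_bt - calc_bt
bias     = diff.mean(axis=0)   # mean obs-calc per channel
variance = diff.var(axis=0)    # across only 4 FOVs, so not very meaningful
```

For these numbers the first channel has a bias of 1.05K and the second -0.525K; with only 4 FOVs the variance is, as noted above, mostly a measure of FOV-to-FOV variability rather than anything statistical.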
This first plot is just an overview of the bias. We do not do well for high-altitude channels since we have no truth there, as expected (we do not know what Evan used to extend past the NCEP model). As we move from 650 cm-1 to higher wavenumbers, the bias drops to 1K where the sonde information should be good, and then goes negative around 750 cm-1. It stays negative in the window region. Ozone, of course, isn't too good.
The water channels show a different bias sign for the sonde versus the model; we are plotting the profiles now to see why. We will be doing statistics of model vs sonde next week in units of radiance. This needs investigation.
In the near-IR, very wild differences are seen at first, and (see later plots) some extremely quick changes in the radiances that are unphysical. Finally, in the 2380-2400 cm-1 region the bias goes quite positive again for channels sensitive to the upper atmosphere, where we have no information (I think). Note that the biases in the 4 and 15 micron regions agree in the sense that, say, in the 4 micron region the bias is 2K when the B(T) is 240K, and you get the same result at 15 microns: a 2K bias when B(T) is 240K.
I did not see any big jumps at array boundaries.
The final plot shows all channels, but with the bias scale zoomed in so you can see that the surface biases are about the same, which I required anyway.
In the future I will have to include water corrections (using the sonde profile) and emissivity corrections to have a better discriminator for clear.
The following graph shows the bias we observed with the single sonde, followed by the "true" bias that we computed as follows. The only good sonde we found was in granule 98. We read the L2Sup.Truth file and plotted the clear flag, and found that in general the clear (over-ocean) FOVs were in the vicinity of the sonde. The "true" bias was then estimated by computing the mean difference between the biased and unbiased radiances for the center FOV in each clear golfball. The variance of this mean was almost zero. The second plot shows this "truth" bias.
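The "true" bias estimate described above reduces, per channel, to a mean difference over the clear-golfball center FOVs. A sketch with hypothetical array names and synthetic radiances (one channel, four clear golfballs):

```python
import numpy as np

# Illustrative: one channel's radiances for the center FOV of each clear
# golfball in granule 98, biased vs unbiased (synthetic values).
rad_biased   = np.array([52.1, 52.3, 52.0, 52.2])
rad_unbiased = np.array([51.6, 51.8, 51.5, 51.7])

true_bias = (rad_biased - rad_unbiased).mean()   # "truth" bias estimate
spread    = (rad_biased - rad_unbiased).var()    # nearly zero, as observed
```

A near-zero variance of this mean, as reported above, indicates the applied bias is essentially constant across the clear golfballs.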
Looks like we derived a fairly accurate bias in the CO2 temperature-sounding channels, but got the wrong sign of the bias in the window region. Our "clear" flag determination was quite crude, so errors here are expected.
In this region the sonde bias had the same sign as the true bias, but the true bias was significantly smaller.
In this region the sonde bias had the same sign as the true bias for the large excursions, as expected. Again, our window bias is the wrong sign and about 10X too large.
As at 15 microns, we recover the true bias with the sonde and model data reasonably accurately.