[Liwg-core] Fwd: Re: non-bfb 289 to 291

Bill Sacks sacks at ucar.edu
Wed Apr 11 05:38:53 MDT 2018


Good news: It turns out that my concern about large water and energy 
fluxes in the first year was unfounded. That is, this was wrong:
> Incidentally: this problem with CISM initial conditions means that (I 
> think) we also would have seen big, fictitious water and energy fluxes 
> from Greenland in the first year of any simulation done since June 
> which pointed to a land initial conditions file taken directly from an 
> offline spinup. I'm not sure how many of the runs over the last year 
> were set up in this way (as opposed to using a land file from a hybrid 
> case, which would be consistent with the buggy CISM).
It turns out that the fluxes are all generated in time step -1, a time 
step that CLM runs before the real start of the simulation, and fluxes 
generated in that time step apparently do not feed back to the rest of 
the system.

So it seems that the only issue is the different glacier cover around 
the margin of the Greenland ice sheet (and slightly different glacier 
elevations).

Bill

On 4/10/18, 12:40 PM, Bill Sacks wrote:
> Hi all,
>
> We discussed this issue at the co-chairs meeting today. I'm pasting in 
> notes about that and other notes from today's co-chairs meeting, below.
>
> Cecile is starting a run to look at the impact of using incorrect 
> glacier cover in all runs since last June. This will probably be run 
> #293, which will be #289 + fixed glacier cover over Greenland. To the 
> extent that there are diffs, those diffs will probably show up most 
> over Greenland, including possibly affecting SMB diagnostics. Would 
> anyone have the time to do a comparison of #293 vs. #289 (once #293 
> has run far enough) to see if there are significant diffs? How long 
> would Cecile need to run #293 for you to do diagnostics on it?
>
> Thanks,
> Bill S
>
>
> *April 10, 2018*
>
> *Problematic issue found from non-bfb analysis, comparing #289 and #291*
>
> One big thing Bill S was concerned about was runs since June 2017 
> that used buggy CISM initialization with a CLM finidat file from an 
> offline spinup. These runs would have had big fluxes in the first 
> year. But, fortunately, it turns out that there weren't very many 
> runs configured that way.
>
> The other issue here is incorrect glacier cover over Greenland. This 
> probably doesn't have too large an effect on global climate, but the 
> concern is that any change could affect the Lab Sea.
>
> One concern is that this will affect the SMB analyses that have been 
> done by the LIWG over the last year.
>
> Next step: start a new run with the new code base, using a CLM initial 
> file from 2 timesteps into #291, to look at diffs due to glacier cover 
> over Greenland. But actually, we might fold WACCM forcing updates into 
> this new run, too, so that the new run has everything we want and can 
> be used in the spinup process.
>
> ----
>
> *Run #290*
>
> Background on what motivated this: WACCM put in a check that aborts if 
> CO2 goes above 720, because that goes past the end of a lookup table. 
> The point of this abort was to warn users not to try to do (e.g.) a 4x 
> CO2 run with this configuration. But the check was triggered even for 
> preindustrial, leading to the discovery of a problem with diffusion.
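>
> (Illustrative sketch only, not WACCM source code: a minimal Python
> version of such a lookup-table guard, with made-up table values and
> grid; only the 720 limit comes from the notes above.)
>
> import numpy as np
>
> co2_grid = np.linspace(100.0, 720.0, 32)  # table abscissae; domain ends at 720
> table = np.log(co2_grid)                  # stand-in for the tabulated quantity
>
> def lookup(co2):
>     # np.interp silently clamps beyond the endpoints, so abort
>     # instead of quietly using out-of-range values
>     if co2 > co2_grid[-1]:
>         raise RuntimeError("CO2 = %g exceeds lookup table limit of %g"
>                            % (co2, co2_grid[-1]))
>     return np.interp(co2, co2_grid, table)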
>
> These same numeric issues affect anything being transported – 
> including H2O and aerosols.
>
> Run #290 has a diffusion fix which has been confirmed to resolve this 
> issue.
>
> Note that the Lab Sea is warmer in this run, which could be a good 
> thing. In addition, ocean temperature and salinity look better over 
> the Lab Sea.
>
> Feeling is that this looks promising... will look further into whether 
> to use this change moving forward.
>
> ----
>
> *Release planning*
>
> For fully-coupled compsets: The two scientifically-supported compsets 
> will be a preindustrial compset (which is a cmip6 preindustrial) and a 
> historical compset – but this historical will NOT be a cmip6 
> historical. There will be other compsets that are functionally but not 
> scientifically supported.
>
> The CESM2.0.0 release won't be a "cmip6" release, but the plan is for 
> it to support cmip6 pre-industrial. A follow-up release (e.g., 
> CESM2.0.1) will add support for cmip6 historical & future scenarios.
>
> JF (echoed by Dave L): Feels that priority should be supporting the 
> fully-coupled. Shouldn't spend a lot of time on standalone compsets if 
> this detracts from having the fully-coupled release done in time.
>
> What is the timeline for having the final values of tuning parameters 
> in place? It's possible that these tuning parameters won't all be 
> finalized even by the June workshop, but the sense is that we really 
> want to have a release by the June workshop.
>
> One feeling from a number of people is that we shouldn't try to have 
> /any/ cmip6 science support, even for preindustrial, in the CESM2.0.0 
> release. We would still provide scientific support for preindustrial 
> and historical in CESM2.0.0, but they wouldn't be called cmip6.
>
> The differences for preindustrial in the cmip6 release update may just 
> be ocean BGC parameters. There are also some WACCM forcing changes in 
> the pipeline, but those should be ready by May 11.
>
> ----
>
> *Next meeting: Friday*
>
> *Please look at run #290 for that.*
>
>
>
> On 4/9/18, 10:09 PM, Bill Sacks wrote:
>> For those not on cesm2control:
>>
>> -------- Original Message --------
>> Subject: 	Re: non-bfb 289 to 291
>> Date: 	Mon, 09 Apr 2018 22:02:51 -0600
>> From: 	Bill Sacks <sacks at ucar.edu>
>> To: 	Jim Edwards <jedwards at ucar.edu>
>> CC: 	cesm2control <cesm2control at cgd.ucar.edu>
>>
>>
>>
>> Jim: Again, thank you very much for tracking down the diffs between 
>> 289 and 291, and for tracking them back to fields sent from CISM.
>>
>> I have some good news and some bad news. I'm bringing cesm2control 
>> into the loop for the bad news. For the executive summary, just read 
>> the parts in bold.
>>
>> One piece of good news is that I think I understand the (or at least 
>> a) source of the 289-291 diffs. Another piece of good news is that 
>> these particular diffs are limited to Greenland. The bad news is that 
>> the changes here are non-negligible over Greenland, and indicate that 
>> all of the runs done since about last June have used somewhat wrong 
>> glacier cover over Greenland; run 291 finally fixed this (unknowingly).
>>
>> I'll start with the impact, and then describe the problem in more detail.
>>
>> The impact can be seen by diffing a CLM h0 file between these two 
>> cases and looking at the fields PCT_LANDUNIT (% of each landunit on 
>> the gridcell) and PCT_GLC_MEC (% of each glacier elevation class); 
>> both fields are constant in time, so it doesn't matter which file 
>> you choose. See 
>> /glade/scratch/sacks/cism_comparison/diff.291_289.clm2.h0.0001-01.nc. 
>> I am attaching images showing the difference in % glacier; they are 
>> the same data, just with different color bars.
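>>
>> (For reference, a minimal sketch of that comparison in Python,
>> assuming xarray; the local file names here are placeholders:)
>>
>> import xarray as xr
>>
>> # Any monthly h0 file works, since both fields are constant in time
>> ds289 = xr.open_dataset("case289.clm2.h0.0001-01.nc")
>> ds291 = xr.open_dataset("case291.clm2.h0.0001-01.nc")
>>
>> for field in ("PCT_LANDUNIT", "PCT_GLC_MEC"):
>>     diff = ds291[field] - ds289[field]
>>     print(field, "max abs diff:", float(abs(diff).max()))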
>>
>> As you can see from this figure, there are substantial differences in 
>> % glacier around the margin of the Greenland ice sheet. If we just 
>> consider the points with differences, here are statistics:
>>
>> In [13]: summarize(pct291[0,3,:,:][w]-pct289[0,3,:,:][w])
>> count    302.000000
>> mean     -10.491008
>> std       13.153102
>> min      -48.280025
>> 25%      -18.350363
>> 50%       -7.774761
>> 75%       -1.507887
>> max       30.457689
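>>
>> (The summarize helper and the mask w aren't shown in this message; a
>> plausible reconstruction, since the printed output matches pandas'
>> describe(), with index 3 assumed to select the glacier landunit:)
>>
>> import numpy as np
>> import pandas as pd
>>
>> def summarize(arr):
>>     # count / mean / std / min / quartiles / max, as printed above
>>     print(pd.Series(np.ravel(arr)).describe())
>>
>> # w: 2-D boolean mask of grid cells where % glacier differs between cases
>> w = pct291[0, 3, :, :] != pct289[0, 3, :, :]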
>>
>> That is, there are 302 grid cells over Greenland with differences in 
>> % glacier, and in those grid cells, the percent of the grid cell 
>> covered by glacier is about 10 percentage points lower in 291 than in 289, 
>> on average. There are also diffs in mean glacier elevation in these 
>> and probably other grid cells in Greenland (not shown).
>>
>> How did this difference come about? In all of these runs, the ice 
>> sheet model (CISM) is static in time, but it dictates CLM's (static) 
>> glacier cover over Greenland at initialization. CISM's glacier cover 
>> can be set either from a restart file or from observed initial 
>> conditions, and we've gone back and forth on how we do this 
>> in the CESM2 test runs.
>>
>> For all runs since about June, 2017, we've been doing a hybrid start 
>> with a refcase whose CISM restart file can be traced back to the 
>> refcase, 
>> /glade/p/cesmdata/cseg/inputdata/ccsm4_init/b.e20.B1850.f09_g17.pi_control.all.149.cism/0103-01-01/. 
>> I added a CISM restart file to that refcase to make it usable in 
>> configurations with CISM. Unfortunately, as I just discovered 
>> tonight, I did not give enough thought to the choice of CISM restart 
>> file used there: The CISM restart file we've been using was 2 years 
>> into a software test with an evolving ice sheet. (I should have used 
>> a restart file from a run with a non-evolving ice sheet; I'm not sure 
>> why I chose this one.) While ice sheets do not generally evolve much 
>> in 2 years, it is common for us to see a large jump in the first year 
>> of the simulation, and that's what is reflected here.
>>
>> In the latest CISM version (cism2_1_50, in cesm2_0_alpha10b and 
>> later), I found a way to do what we've really wanted all along: 
>> forcing CISM to use observed initial conditions whenever it's running 
>> in no-evolve mode. This wouldn't have changed answers if the refcase 
>> were set up correctly, but since I put a bad restart file into the 
>> refcase we've been using, it resulted in the answer changes that 
>> we're seeing.
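>>
>> (A schematic of the behavior change described above; Python
>> pseudologic for illustration, not actual CISM or CESM code:)
>>
>> def glacier_ic_source(evolve_ice, have_restart_file):
>>     # cism2_1_50 and later: in no-evolve mode, always use observed
>>     # initial conditions, even if a refcase supplies a restart file
>>     if not evolve_ice:
>>         return "observed initial conditions"
>>     # evolving ice sheet: continue from a restart file when one exists
>>     return "restart file" if have_restart_file else "observed initial conditions"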
>>
>> I guess this will have to be the first test case for the question, 
>> "Will we allow this answer-changing bug fix?". The refcase we've been 
>> using is definitely buggy in this respect. Furthermore, it is not 
>> directly usable in the new version of CISM (which was the motivation 
>> for making the behavior change in cism2_1_50). We could back out the 
>> change in cism2_1_50 and instead produce a patched-up refcase that 
>> gives bit-for-bit answers with #289 and works with the new version of 
>> the code. But in addition to carrying forward the bug, this path will 
>> cause problems for anyone doing hybrid runs off of other refcases. So 
>> my preference is to allow this answer change, but I recognize that 
>> this needs higher-level sign-off.
>>
>> Bill
>>
>
