From Jan.Lenaerts at Colorado.EDU Tue Mar 6 14:51:55 2018
From: Jan.Lenaerts at Colorado.EDU (Jan Lenaerts)
Date: Tue, 6 Mar 2018 21:51:55 +0000
Subject: [Liwg-core] Fwd: [Cesm2control] 20th century from 280
Message-ID: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu>

Hi all,

This is potentially relevant for us as well. It could be that the Greenland tundra problem has been resolved (or will be in a transient simulation) because of a bug found in the energy conservation over land (as far as I have tracked the recent meetings). I don't know the exact details though.

Cheers,
Jan

Begin forwarded message:

From: David Lawrence
Subject: Re: [Cesm2control] 20th century from 280
Date: 6 March 2018 at 14:46:55 GMT-7
To: Cecile Hannay
Cc: cesm2control

I just checked, and land is definitely fluxing water out as it comes to a new equilibrium in all runs post-279. It looks like the big part of the flux is done after about 50 years, so we could restart 284/285 with land initial conditions from the longest of the post-279 runs.

http://webext.cgd.ucar.edu/B1850/b.e20.B1850.f09_g17.pi_control.all.279/lnd/b.e20.B1850.f09_g17.pi_control.all.279.21_40-b.e20.B1850.f09_g17.pi_control.all.266.21_40/set6/set6_hydro_Polar.png

http://webext.cgd.ucar.edu/B1850/b.e20.B1850.f09_g17.pi_control.all.279/lnd/b.e20.B1850.f09_g17.pi_control.all.279.21_40-b.e20.B1850.f09_g17.pi_control.all.266.21_40/set6/set6_snowliqIce_Polar.png

It looks like it is mostly from the Canadian Archipelago.

Dave

On Tue, Mar 6, 2018 at 2:41 PM, Cecile Hannay wrote:

Hi Alice,
I was planning to extend 283 to 100 years in any case. What I meant was: should we wait until it reaches 100 years before starting the 20th century (283_20thC)?
Maybe we can start the 20th century anyway, and if 283 freezes, we can decide what to do with the 20th century (to continue or to stop it).
Cecile

++++++++++++++++++++++++++++++++++++++++++++
Cecile Hannay
National Center for Atmospheric Research
email: hannay at ucar.edu
phone: 303-497-1327
webpage: http://www.cgd.ucar.edu/staff/hannay/
++++++++++++++++++++++++++++++++++++++++++++

On Tue, Mar 6, 2018 at 2:37 PM, Alice DuVivier wrote:

I think it would be good to keep 283 going longer as well. It would also let us know whether Keith's hypothesis about an initial pulse of fresh water (from the 262c initial condition, pre land bug fix and therefore with deeper snow) would level out over time.

--------------------------------------------------------
Alice K. DuVivier
email: duvivier at ucar.edu
phone: 303-497-1786
Associate Scientist II
National Center for Atmospheric Research
Climate and Global Dynamics Laboratory
PO Box 3000, Boulder, CO 80307-3000
--------------------------------------------------------

On Tue, Mar 6, 2018 at 2:31 PM, Cecile Hannay wrote:

Should I wait until 283 reaches 100 years before starting the 20th century?
I can use the IC of 280 at year 60, but that is only a 60-year run. We have seen the Labrador Sea freezing after 60 years; a better threshold for deciding whether a run will freeze or not was 100 years.

On Tue, Mar 6, 2018 at 2:24 PM, Keith Lindsay wrote:

Hi,

Attached is a plot of annual Lab Sea IFRAC and surface salinity from recent runs.

The 279+ runs are fresher than 266 in the Lab Sea.
I wonder if the land energy fix is melting the built-up snow and producing a pulse of runoff.

Keith

On Tue, Mar 6, 2018 at 1:47 PM, Alice DuVivier wrote:

Alright everyone, I'm the bearer of bad news: in 284 the Lab Sea froze over, and 285 looks like it is about to freeze over. For these reasons, Marika, Dave, and I are not in favor of continuing 284 and 285. There are just too many uncertainties about what in the ozone or gravity wave additions could be triggering the freezing. Julio thinks we're simulating anti-shortwave radiation. ;)

On that note, 280 and 283 are pretty similar. Neither freezes over, and both have thinner ice than 269. The southern hemispheres look pretty similar as well. It would be good to continue 283 out longer just to be sure, and to see a 20th century off this run (as Cecile details). But for now our preference is definitely 283.

Cheers,
Alice

On Tue, Mar 6, 2018 at 11:52 AM, Cecile Hannay wrote:

Hi Gokhan,

We talked about starting a 20th century from 280.

I am thinking that I would prefer the code from 283 (instead of 280). The two simulations are very, very similar (for the AMWG diagnostics). The code from 283 is "more bug-free".

So basically, I would like to start 283_20thC and not 280_20thC. The reason we talked about starting from 280 was that that run was longer and therefore more spun up (280 is 60 years while 283 is only 30 years).
If we want to use the more spun-up state, we could use the initial condition from the end of 280 to start 283_20thC.

Thoughts?

_______________________________________________
Cesm2control mailing list
Cesm2control at cgd.ucar.edu
http://mailman.cgd.ucar.edu/mailman/listinfo/cesm2control

- - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan Lenaerts
Assistant Professor
Department of Atmospheric and Oceanic Sciences
University of Colorado Boulder
Mail: 311 UCB | Boulder, CO 80309-0311
Phone: +1-303-735-5471
Office: SEEC N257
@lenaertsjan || website

From sacks at ucar.edu Tue Mar 6 15:07:47 2018
From: sacks at ucar.edu (Bill Sacks)
Date: Tue, 06 Mar 2018 15:07:47 -0700
Subject: [Liwg-core] Fwd: [Cesm2control] 20th century from 280
In-Reply-To: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu>
References: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu>
Message-ID: <5A9F1133.4000001@ucar.edu>

Thanks for pointing out this connection, Jan!

For those interested in the details of the bug fix, see https://github.com/ESCOMP/ctsm/pull/307

Bill

On 3/6/18, 2:51 PM, Jan Lenaerts wrote:
> Hi all,
>
> This is potentially relevant for us as well.
From sacks at ucar.edu Wed Mar 7 11:33:33 2018
From: sacks at ucar.edu (Bill Sacks)
Date: Wed, 07 Mar 2018 11:33:33 -0700
Subject: [Liwg-core] [Cesm2control] 20th century from 280
References: <5A9F1B99.8070000@ucar.edu>
Message-ID: <5AA0307D.6070408@ucar.edu>

Thanks, Sean.

Jan or others: Based on Sean's reply, should we recommend using reset_snow_glc for the new runs? I don't have a good feeling for this.

Bill S

On 3/7/18, 11:30 AM, Sean Swenson wrote:
> I think there is one change that affects phase change in snow, which I think would apply everywhere. The biggest change was for lake, and another change was for h2osfc, neither of which affects glaciers. I think the biggest change right now is coming from lake.
>
> On Tue, Mar 6, 2018 at 3:52 PM, Bill Sacks wrote:
>
> Hi Julio,
>
> Yes: you could reset the snow pack over non-glacier regions by putting the following in user_nl_clm:
>
> reset_snow = .true.
>
> I'm not sure whether this is needed over glacier regions; I'm thinking not, but in case you need it there, too, you can do:
>
> reset_snow_glc = .true.
> reset_snow_glc_ela = 1774.
>
> Jan or Sean: Do you have any thoughts on whether the latter should be included here? (Sean: I'm not sure whether the changes you made affected glacier columns at all?)
>
> Bill S
>
> On 3/6/18, 3:45 PM, Julio Bacmeister wrote:
>
> Another LENS land start sounds like a good idea.
>
> It also looks like 280 is not freezing. We could run that longer (~100 years) and start the land from there.
>
> And a question - can we manually zero out the snow pack in the land restarts used in 279-284 and see what happens?
>
> On Tue, Mar 6, 2018 at 2:48 PM, David Bailey wrote:
>
> I hesitate to suggest this, but should we start another run from LENS+? I believe we start off with very little snow in that case?
>
> Dave
>
> On Tue, Mar 6, 2018 at 2:45 PM, Keith Oleson wrote:
>
> Hi,
>
> Yes, it looks like a lot of snow is melting in the Canadian Arctic and creating higher runoff. See here for 279 compared to 266:
>
> http://webext.cgd.ucar.edu/B1850/b.e20.B1850.f09_g17.pi_control.all.279/lnd/b.e20.B1850.f09_g17.pi_control.all.279.21_40-b.e20.B1850.f09_g17.pi_control.all.266.21_40/set6/set6_landf_Canadian_Arctic.png
>
> The other Keith
>
> On Tue, Mar 6, 2018 at 2:24 PM, Keith Lindsay wrote:
>
> Hi,
>
> Attached is a plot of annual Lab Sea IFRAC and surface salinity from recent runs.

From sacks at ucar.edu Wed Mar 7 12:04:55 2018
From: sacks at ucar.edu (Bill Sacks)
Date: Wed, 07 Mar 2018 12:04:55 -0700
Subject: [Liwg-core] [Cesm2control] 20th century from 280
In-Reply-To: <5AA0307D.6070408@ucar.edu>
References: <5A9F1B99.8070000@ucar.edu> <5AA0307D.6070408@ucar.edu>
Message-ID: <5AA037D7.1050403@ucar.edu>

Never mind, looks like Cecile will already do this:

> # 286: same as 283
> + ocean and sea-ice start from LENS at year 402
> + land and mosart starting from 262c at yr 161
> + reset_snow = .true.
> + reset_snow_glc = .true.
> + reset_snow_glc_ela = 1774.

On 3/7/18, 11:33 AM, Bill Sacks wrote:
> Thanks, Sean.
>
> Jan or others: Based on Sean's reply, should we recommend using reset_snow_glc for the new runs? I don't have a good feeling for this.
>
> Bill S
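For reference, the namelist changes listed above for 286 are plain lines appended to user_nl_clm in the case directory. A minimal Python sketch of doing that is below; the case path is a made-up placeholder, and the three settings are exactly the ones quoted in this thread.

from pathlib import Path

# Hypothetical case directory; substitute the real one for your run.
case_dir = Path("/glade/p/work/username/cases/my_B1850_case_286")

# Settings discussed above: reset the snow pack on non-glacier columns,
# and optionally on glacier columns (reset_snow_glc_ela is an elevation
# threshold in meters, as quoted in the thread).
settings = [
    "reset_snow = .true.",
    "reset_snow_glc = .true.",
    "reset_snow_glc_ela = 1774.",
]

user_nl_clm = case_dir / "user_nl_clm"
with user_nl_clm.open("a") as f:
    for line in settings:
        f.write(line + "\n")

print(f"Appended {len(settings)} settings to {user_nl_clm}")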
From L.vanKampenhout at uu.nl Thu Mar 8 03:10:17 2018
From: L.vanKampenhout at uu.nl (Kampenhout, L. van (Leo))
Date: Thu, 8 Mar 2018 10:10:17 +0000
Subject: [Liwg-core] [Cesm2control] 20th century from 280
In-Reply-To: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu>
References: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu>
Message-ID: <1A4094BB-3DAE-479D-B2B0-1EF6D80664DA@uu.nl>

I just checked H2OSNO in #286 and it has grown > 1 m in August, after 20 years of runtime.

Leo

On 6 Mar 2018, at 22:51, Jan Lenaerts wrote:
> Hi all,
>
> This is potentially relevant for us as well.
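The kind of check Leo describes can be done straight from the CLM monthly history files. Below is a minimal xarray sketch, assuming monthly h0 output on the f09 grid; the archive path and the lat/lon box are made-up placeholders, and H2OSNO (snow water equivalent, in mm) is the field named above.

import xarray as xr

# Hypothetical path to CLM monthly history output for the run of interest.
files = "/glade/scratch/username/archive/my_case/lnd/hist/*.clm2.h0.*.nc"

ds = xr.open_mfdataset(files, combine="by_coords")

# Keep only August monthly means.
h2osno = ds["H2OSNO"]
august = h2osno.where(h2osno["time"].dt.month == 8, drop=True)

# Average over a hypothetical high-latitude box (unweighted mean is
# fine for a quick look); adjust the bounds for the region of interest.
box_mean = august.sel(lat=slice(75, 83), lon=slice(280, 340)).mean(dim=("lat", "lon"))

# A steadily increasing August value over the years indicates that snow is
# accumulating rather than equilibrating.
print(box_mean.values)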
From L.Muntjewerf at tudelft.nl Thu Mar 8 08:10:15 2018
From: L.Muntjewerf at tudelft.nl (Laura Muntjewerf - CITG)
Date: Thu, 8 Mar 2018 15:10:15 +0000
Subject: [Liwg-core] BG-JG data space requirements

Hi all,

For the BG-JG spin-up, I put together a document on the data space we require [estimate] and ways to facilitate the archiving. Please find it attached, and feel welcome to comment.

https://docs.google.com/document/d/1Zch9-xGPwJ9sHwYBz6n-sBhsJHUptrzdPsHF1Nu6eIU/edit?usp=sharing

Now that I come to think of it, maybe /glade/p/cesm/liwg is an option?

Laura
From sacks at ucar.edu Thu Mar 8 08:46:59 2018
From: sacks at ucar.edu (Bill Sacks)
Date: Thu, 08 Mar 2018 08:46:59 -0700
Subject: [Liwg-core] BG-JG data space requirements
Message-ID: <5AA15AF3.4030902@ucar.edu>

Hi Laura,

Thanks very much for putting this together!

I don't think /glade/p/cesm/liwg is a viable option right now: it falls under this nearly-full quota:

/glade/p/cesm    198.48 TB    200.22 TB    99.13 %

In order to determine the best space(s) for this, it would help to know:

(1) How much of the data from one BG-JG iteration do you need to save once that iteration is complete? Is it really necessary to keep all of these data around, or can you delete most of it and (for example) just keep a set of restart files around to facilitate backing up or rerunning segments if you need to?

(2) How much of these data need to be kept medium to long term, e.g., for a year, for a few years, or longer?

About a year ago, CISL announced a new plan for data storage that I thought was supposed to give us all more disk space, but I haven't heard what (if anything) came of that. We can look into that if that would help. But first it would help to know how much of these data really need to be kept, and for what length of time.

Thanks,
Bill S

On 3/8/18, 8:10 AM, Laura Muntjewerf - CITG wrote:
> Hi all,
>
> For the BG-JG spin-up, I put together a document on data space we require [estimate], and ways to facilitate the archiving.

From L.Muntjewerf at tudelft.nl Fri Mar 9 05:32:59 2018
From: L.Muntjewerf at tudelft.nl (Laura Muntjewerf - CITG)
Date: Fri, 9 Mar 2018 12:32:59 +0000
Subject: [Liwg-core] BG-JG data space requirements
In-Reply-To: <5AA15AF3.4030902@ucar.edu>
References: <5AA15AF3.4030902@ucar.edu>
Message-ID: <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl>

Hi Bill,

Those are good questions. I suppose it involves decision-making.

1a). Inside one BG-JG iteration, we need to keep the coupler and restart files from the BG. One set is ~1.5 TB. Pragmatically, in case rerunning is necessary or we hit some other problem, I think it's good to keep the last set of BG restart and coupler files that produced a successful JG around in scratch.
So the minimum peak scratch requirement during the spin-up, when a JG[n] is about to finish, is: 1.5 TB BG[n]_forcing + 5.2 TB JG[n] + 1.5 TB BG[n-1]_forcing (in case of problems) = 8.2 TB.

1b). For short-term storage after completion of one BG-JG iteration, I suppose you are referring to the analysis that needs to be done on the output? There are a number of variables for which it is good to keep the complete timeseries, but mostly we are interested in the end state. I don't know exactly how much, but as a rough estimate I expect something on the order of a few 100 GB per simulation.

2). For the longer term, I would like to have it integrally stored on HPSS for at least a few years.

Laura

On 8 Mar 2018, at 16:46, Bill Sacks wrote:
> Hi Laura,
>
> Thanks very much for putting this together!
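A quick tally of the scratch and archive arithmetic above, for reference. The 1.5 TB and 5.2 TB figures are the ones Laura quotes; the iteration count in the long-term estimate is an assumed placeholder.

# Sizes quoted in the thread, in TB.
BG_FORCING_SET_TB = 1.5   # one set of BG restart + coupler history files
JG_RUN_TB = 5.2           # one JG iteration on disk

def peak_scratch_tb(keep_previous_forcing=True):
    """Peak scratch usage while JG[n] is finishing, per the estimate above."""
    total = BG_FORCING_SET_TB + JG_RUN_TB
    if keep_previous_forcing:
        # Keep BG[n-1] forcing around as a fallback in case of problems.
        total += BG_FORCING_SET_TB
    return total

def long_term_archive_tb(n_iterations, per_iteration_tb=BG_FORCING_SET_TB + JG_RUN_TB):
    """Rough long-term volume if every iteration is archived in full.

    n_iterations is a placeholder; Laura's overall estimate for the full
    spin-up is ~75 TB.
    """
    return n_iterations * per_iteration_tb

print(peak_scratch_tb())          # 8.2 TB, matching the estimate in the thread
print(long_term_archive_tb(11))   # ~74 TB for an assumed 11 iterations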
From L.Muntjewerf at tudelft.nl Fri Mar 9 06:02:38 2018
From: L.Muntjewerf at tudelft.nl (Laura Muntjewerf - CITG)
Date: Fri, 9 Mar 2018 13:02:38 +0000
Subject: [Liwg-core] BG-JG data space requirements
In-Reply-To: <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl>
References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl>
Message-ID: <6713C498-E497-451D-B157-CAC1A623715A@tudelft.nl>

Addendum: by "a few years" I mean the duration of my PhD, i.e. 4 years.

> 2). For the longer term, I would like to have it integrally stored on HPSS for at least a few years.

Thanks,
Laura

From sacks at ucar.edu Fri Mar 9 06:14:51 2018
From: sacks at ucar.edu (Bill Sacks)
Date: Fri, 09 Mar 2018 06:14:51 -0700
Subject: [Liwg-core] BG-JG data space requirements
In-Reply-To: <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl>
References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl>
Message-ID: <5AA288CB.4030603@ucar.edu>

Hi Laura,

So, if I'm understanding this correctly: roughly speaking, you'll need peak short-term storage space of close to 10 TB, and then longer-term storage of a few TB. Does that sound about right? (I'm a little confused by the value of 1.5 TB that you give for a restart set: that sounds awfully high for one set of restarts; I'd expect something more like tens of GB at most. But I'm not sure the final numbers change based on revising that downwards.)

So do you feel this can be accommodated by (1) temporarily doubling your scratch space, and then (2) finding a long-term storage space for a few TB?

Please let me know if I'm misunderstanding / misinterpreting this.

Thanks,
Bill S

On 3/9/18, 5:32 AM, Laura Muntjewerf - CITG wrote:
> Hi Bill,
>
> Those are good questions. I suppose it involves decision-making.
>
> 1a). Inside one BG-JG iteration, we need to keep the coupler and restart files from the BG. One set is ~1.5 TB.
From L.Muntjewerf at tudelft.nl Fri Mar 9 07:08:06 2018
From: L.Muntjewerf at tudelft.nl (Laura Muntjewerf - CITG)
Date: Fri, 9 Mar 2018 14:08:06 +0000
Subject: [Liwg-core] BG-JG data space requirements
In-Reply-To: <5AA288CB.4030603@ucar.edu>
References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu>
Message-ID: <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl>

Hi Bill,

Thanks for your reply.

Firstly, let me say that, as far as I'm aware, Marcus will be doing the running. I am putting together the numbers just for practical reasons, because I was looking into this before.

Regarding your question on the 1.5 TB that needs to be kept from a BG: this is the restart files but also 30 years of coupler files (ha2x3h - ha2x1h - ha2x1h - ha2x1d). I am probably overestimating because I don't know the ha2x1d.

On your point 1): yes, for effectively carrying out the spin-up, a temporary doubling of the scratch space should suffice. This will require some time/coordination in moving output between jobs, but that should be fine.

On point 2): I would like to have all of the BG-JG runs stored long-term. I estimate this to be ~75 TB. Finding a long-term storage space would indeed accommodate that.

Laura

On 9 Mar 2018, at 14:14, Bill Sacks wrote:
> Hi Laura,
>
> So, if I'm understanding this correctly: roughly speaking, you'll need peak short-term storage space of close to 10 TB, and then longer-term storage of a few TB. Does that sound about right?
From garmeson at gmail.com Fri Mar 9 09:16:53 2018
From: garmeson at gmail.com (Jeremy Fyke)
Date: Fri, 09 Mar 2018 16:16:53 +0000
Subject: [Liwg-core] BG-JG data space requirements
In-Reply-To: <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl>
References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu> <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl>

Hi all,

In my experience it's best to keep at least 10 TB of disk available for JG/BG. This covers retention of all run files for a few iterations. The most necessary things to retain are the frequent coupler history files from the BG to drive the next JG, and the set of restart files needed for the next iteration (JG or BG). But for analysis it's nice to have a few iterations of history files available on demand.

I'm sure one could more carefully parse things to reduce the files needed on disk. In my experience, though, I just ended up accidentally archiving needed files when I tried to pick and choose. So I'd just recommend keeping the last few iterations on disk in full (as well as archiving things promptly).

Jer

On Fri, Mar 9, 2018 at 6:08 AM Laura Muntjewerf - CITG <L.Muntjewerf at tudelft.nl> wrote:
> Hi Bill,
>
> Thanks for your reply.

From sacks at ucar.edu Fri Mar 9 14:57:22 2018
From: sacks at ucar.edu (Bill Sacks)
Date: Fri, 09 Mar 2018 14:57:22 -0700
Subject: [Liwg-core] BG-JG data space requirements
References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu> <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl>
Message-ID: <5AA30342.3010408@ucar.edu>

Hi Laura,

I'm having trouble reconciling all the numbers here, so I still feel I'm not totally understanding this.
So I'll reply a bit more generally rather than trying to address specific numbers:

In general, we're being asked by CISL (the computing group) to think carefully about what we really need to store long-term. Disk space (and HPSS storage) is at a higher premium than processor time these days. So along those lines, I wonder if it would be possible to reduce the long-term storage needs by reducing the number of variables output by the various components (especially multi-level variables) and/or the output frequency -- e.g., outputting most variables as annual rather than monthly averages for much of the spin-up period? I believe the land group uses one or both of these strategies when doing its land-only spinups.

It would also be worth writing to Gary Strand. He's the person here who has the best sense of the storage landscape, and could perhaps give some suggestions for how to proceed.

Bill S

On 3/9/18, 9:16 AM, Jeremy Fyke wrote:
> Hi all
>
> In my experience it's best to keep at least 10 TB of disk available for JG/BG. This covers retention of all run files for a few iterations. Most necessary to retain are the frequent coupler history files from the BG, to drive the next JG, and the set of restart files needed for the next iteration (JG or BG). But for analysis it's nice to have a few iterations of history files available on demand.
>
> I'm sure one could more carefully parse things to reduce the files needed on disk. In my experience, though, I just ended up accidentally archiving needed files when I tried to pick and choose. So I'd just recommend keeping the last few iterations on disk in full (as well as archiving things promptly).
>
> Jer
>
> On Fri, Mar 9, 2018 at 6:08 AM Laura Muntjewerf - CITG wrote:
>> Hi Bill,
>>
>> Thanks for your reply.
>>
>> Firstly let me say that, as far as I'm aware, Marcus will be running. I am just putting numbers together, for practical reasons, because I was looking into this before.
>>
>> Regarding your question on this 1.5 TB that needs to be kept from a BG: this is the restart files but also 30 years of coupler files (ha2x3h - ha2x1h - ha2x1h - ha2x1d). I am probably overestimating because I don't know the ha2x1d.
>>
>> On your point 1): yes, for effectively carrying out the spin-up a temporary doubling of the scratch space should suffice. This will require some time/coordination in moving output in between jobs, but that should be fine.
>>
>> On point 2): I would like to have all of the BG-JG long-term stored. I estimate this to be ~75 TB. Finding a long-term storage space would indeed accommodate that.
>>
>> Laura
>>
>>> On 9 Mar 2018, at 14:14, Bill Sacks wrote:
>>>
>>> Hi Laura,
>>>
>>> So, if I'm understanding this correctly: roughly speaking, you'll need a peak short-term storage space of close to 10 TB, and then longer-term storage of a few TB. Does that sound about right? (I'm a little confused by the value of 1.5 TB that you give for a restart set: that sounds awfully high for one set of restarts; I'd expect something more like 10s of GBs at most. But I'm not sure that the final numbers change based on revising that downwards.)
>>>
>>> So do you feel this can be accommodated by (1) temporarily doubling your scratch space, and then (2) finding a long-term storage space for a few TBs?
>>>
>>> Please let me know if I'm misunderstanding / misinterpreting this.
>>>
>>> Thanks,
>>> Bill S
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
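The peak-scratch accounting in Laura's point 1a) earlier in this thread is simple enough to restate mechanically. A minimal sketch of that arithmetic, using the thread's rough per-piece estimates (the 1.5 TB forcing figure is later revised downward to ~0.75 TB in this same thread):

```python
# Peak scratch use while JG[n] is finishing, using the rough sizes quoted
# in this thread (estimates, not measured values).
bg_forcing_tb = 1.5        # BG[n] coupler history + restart files kept as JG forcing
jg_run_tb = 5.2            # the JG[n] segment itself on scratch
prev_bg_forcing_tb = 1.5   # BG[n-1] forcing kept around in case of problems

peak_scratch_tb = bg_forcing_tb + jg_run_tb + prev_bg_forcing_tb
print(f"Peak scratch during a JG segment: ~{peak_scratch_tb:.1f} TB")  # ~8.2 TB
```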
From M.Vizcaino at tudelft.nl Mon Mar 12 05:35:41 2018
From: M.Vizcaino at tudelft.nl (Miren Vizcaino)
Date: Mon, 12 Mar 2018 11:35:41 +0000
Subject: [Liwg-core] [Cesm2control] 20th century from 280
In-Reply-To: <1A4094BB-3DAE-479D-B2B0-1EF6D80664DA@uu.nl>
References: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu> <1A4094BB-3DAE-479D-B2B0-1EF6D80664DA@uu.nl>
Message-ID: 

Hi All,

Raymond and Laura checked 279 vs 266 and found that the snow cover is decreasing strongly in the Canadian Archipelago, but not in Greenland. From Raymond: http://webext.cgd.ucar.edu/B1850/b.e20.B1850.f09_g17.pi_control.all.279/lnd/b.e20.B1850.f09_g17.pi_control.all.279.21_40-b.e20.B1850.f09_g17.pi_control.all.266.21_40/setsIndex.html

It does not look like the Greenland snow is much affected by this bug fix, as it produces similar amounts of runoff in both runs. However, it seems to have a high impact on the snow cover in the Canadian Arctic. The snow w.e. reduces from ~7200 mm to ~4700 mm over 40 years under preindustrial conditions. Temperatures increase over the Canadian Archipelago, and also over Greenland (because of heat advection from this area). It is likely that if the model gets to simulate a seasonal snow cover over the Canadian Archipelago, we will see much improved heat advection over Greenland. We are checking #286 at the moment.

"I just checked H2OSNO in #286 and it has grown > 1 m in August, after 20 years of runtime." Leo, over which area is this?

Thanks,
Miren

On Mar 8, 2018, at 11:10 AM, Kampenhout, L. van (Leo) wrote:
> I just checked H2OSNO in #286 and it has grown > 1 m in August, after 20 years of runtime.
>
> Leo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sacks at ucar.edu Mon Mar 12 13:05:53 2018
From: sacks at ucar.edu (Bill Sacks)
Date: Mon, 12 Mar 2018 13:05:53 -0600
Subject: [Liwg-core] Next liwg-core telecon: Wednesday at 9 am Mountain DAYLIGHT Time
Message-ID: <5AA6CF91.7080104@ucar.edu>

Hi all,

The next liwg-core telecon is scheduled for this Wednesday at 9 am Mountain DAYLIGHT Time -- note that the change to Daylight Saving Time happened in the U.S. last weekend.

Editable agenda here: https://docs.google.com/document/d/1jJ_-6PNJ3hkB2xILmfaTpDxt43PhfZIpp7CvJjIqTbk/edit

Call-in info (via ReadyTalk):
U.S. Toll-free number: 1-866-740-1260
Netherlands Toll-free number: 08000202061 -- you must dial this as listed; you should NOT precede the number with a country code
The access code is 4971358

Screen sharing (if we need it for any calls; I probably won't set this up by default): Go to www.readytalk.com; under "join a meeting" enter access code 4971358

Bill S
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From L.vanKampenhout at uu.nl Mon Mar 12 13:18:10 2018
From: L.vanKampenhout at uu.nl (Kampenhout, L. van (Leo))
Date: Mon, 12 Mar 2018 19:18:10 +0000
Subject: [Liwg-core] [Cesm2control] 20th century from 280
In-Reply-To: 
References: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu> <1A4094BB-3DAE-479D-B2B0-1EF6D80664DA@uu.nl>
Message-ID: <2146EA0B-8C90-471D-A3CC-7D29FF5241F8@uu.nl>

Hi,

I just checked: northern Greenland.

Leo

On 12 Mar 2018, at 12:35, Miren Vizcaino wrote:
> I just checked H2OSNO in #286 and it has grown > 1 m in August, after 20 years of runtime. Leo, over which area is this?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Jan.Lenaerts at Colorado.EDU Mon Mar 12 13:57:55 2018
From: Jan.Lenaerts at Colorado.EDU (Jan Lenaerts)
Date: Mon, 12 Mar 2018 19:57:55 +0000
Subject: [Liwg-core] Next liwg-core telecon: Wednesday at 9 am Mountain DAYLIGHT Time
In-Reply-To: <5AA6CF91.7080104@ucar.edu>
References: <5AA6CF91.7080104@ucar.edu>
Message-ID: 

For the Dutchies: this means 16h local time.

- - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan Lenaerts
Assistant Professor
Department of Atmospheric and Oceanic Sciences
University of Colorado Boulder
Mail: 311 UCB | Boulder, CO 80309-0311
Phone: +1-303-735-5471
Office: SEEC N257
@lenaertsjan || website
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
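For anyone else juggling the time zones, the conversion Jan gives can be double-checked with Python's zoneinfo. A small sketch; the telecon date of 14 March 2018 is inferred from "this Wednesday" in Bill's message, and the zone names are standard tz database identifiers:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+ (may need the tzdata package on some systems)

# Telecon: Wednesday 14 March 2018, 9:00 Mountain Daylight Time (Boulder)
telecon = datetime(2018, 3, 14, 9, 0, tzinfo=ZoneInfo("America/Denver"))

# Same instant in the Netherlands, which stays on winter time until 25 March 2018
print(telecon.astimezone(ZoneInfo("Europe/Amsterdam")))
# 2018-03-14 16:00:00+01:00  -> 16h local time, as Jan says
```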
From marcusl at ucar.edu Mon Mar 12 17:48:35 2018
From: marcusl at ucar.edu (Marcus Lofverstrom)
Date: Mon, 12 Mar 2018 17:48:35 -0600
Subject: [Liwg-core] BG-JG data space requirements
In-Reply-To: <5AA30342.3010408@ucar.edu>
References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu> <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl> <5AA30342.3010408@ucar.edu>
Message-ID: 

Hi Laura + Bill + Jeremy,

Sorry for coming in late to this discussion, I was out of the office last week.

We can easily reduce the output size of the monthly history files, as only a small number of variables are needed to ensure that the model climate is behaving as it should. Similarly, we only need to save the restart files from the end of each JG or BG run segment, which will help as well (see below).

We do however need to store the high-frequency coupler history data locally, as it is used to drive the "next" JG segment. This dataset is quite large, as several (2D) fields are written a few times per day over ~30 years. The good news is that we can move it from /glade to some other temporary (or indeed permanent) storage space when starting the next BG segment. The bad news is that high-frequency output is quite noisy and therefore does not compress particularly well.

Jeremy, do you have a sense for how large the coupler history data is from each BG segment?

Some work has been devoted to reducing the size of the monthly history output in the last few months. These numbers are based on the standard output from sim #288:

*CAM:* 947 MB per month -- 30 years is ~340 GB (2D: 242 variables; 3D: 132 variables)
*POP:* 1.2 GB per month -- 30 years is ~432 GB; 150 years is ~2.16 TB (2D: 147 variables; 3D: 58 variables)
*CLM:* 282 MB per month -- 30 years is ~101 GB; 150 years is ~508 GB (2D: 422 variables; 3D: 35 variables)
*ICE:* 76 MB per month -- 30 years is ~27 GB; 150 years is ~137 GB (2D: 104 variables; 3D: 12 variables)
*GLC:* 90 MB per year -- 30 years is ~2.7 GB; 150 years is ~13.5 GB
*ROF:* 9.9 MB per month -- 30 years is ~3.5 GB; 150 years is ~18 GB
*Restart:* 13 GB per write

One BG run (30 years of all components + one set of restart files): ~920 GB
One JG run (150 years of all components but the atmosphere + one set of restart files): ~2.85 TB
8 x BG (30 years + 1 x rest) + 8 x JG (150 years + 1 x rest) = ~30.1 TB

Note that these numbers show the uncompressed size of the standard output. There are a lot of fields that we don't need in order to make sure that the model climate is behaving well. Big savings can be made by reducing the number of vertically resolved fields and only saving variables that we know we want to look at and/or that are used by the diagnostic packages.

I think we can easily reduce the output size by a factor of 10, if not more. I have started a shared document (you should be able to edit it) listing the fields we want to save. I will work on creating a more complete list when Cheyenne is back online.

https://docs.google.com/document/d/10Zi7fDbaO06lkwd6N8mM1RtY1E79Y3mVbRxpuv3nc14/edit?usp=sharing

Best,
Marcus

-- 
Marcus Löfverström (PhD)
Post-doctoral researcher
National Center for Atmospheric Research
1850 Table Mesa Dr.
80305 Boulder, CO, USA
https://sites.google.com/site/lofverstrom/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
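Marcus's per-component figures above roll up into the quoted BG/JG/spin-up totals with a few lines of arithmetic. A minimal sketch of that roll-up (the sizes are his estimates for sim #288; the dictionary keys and the helper function are only illustrative):

```python
# Rough roll-up of the per-component history-file sizes quoted above (GB).
# BG = 30 years fully coupled; JG = 150 years of everything but the atmosphere.
monthly_gb = {"cam": 0.947, "pop": 1.2, "clm": 0.282, "cice": 0.076, "rof": 0.0099}
glc_yearly_gb = 0.090
restart_gb = 13.0

def run_size_gb(years, include_atm=True):
    comps = monthly_gb if include_atm else {k: v for k, v in monthly_gb.items() if k != "cam"}
    history = sum(v * 12 * years for v in comps.values()) + glc_yearly_gb * years
    return history + restart_gb  # one restart set kept per segment

bg = run_size_gb(30)                      # ~920 GB
jg = run_size_gb(150, include_atm=False)  # ~2.85 TB
total = 8 * (bg + jg)                     # ~30 TB for 8 BG-JG iterations
print(f"BG ~{bg:.0f} GB, JG ~{jg/1000:.2f} TB, 8 iterations ~{total/1000:.1f} TB")
```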
URL: From sacks at ucar.edu Tue Mar 13 06:51:22 2018 From: sacks at ucar.edu (Bill Sacks) Date: Tue, 13 Mar 2018 06:51:22 -0600 Subject: [Liwg-core] BG-JG data space requirements In-Reply-To: References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu> <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl> <5AA30342.3010408@ucar.edu> Message-ID: <5AA7C94A.4070409@ucar.edu> Thanks a lot for this, Marcus. Bill On 3/12/18, 5:48 PM, Marcus Lofverstrom wrote: > Hi Laura + Bill + Jeremy, > > Sorry for coming in late to this discussion, I was out of the office > last week. > > We can easily reduce the output size of the monthly history files, as > only a small number of variables are needed to ensure that the model > climate is behaving as it should. Similarly, we only need to save the > restart files from the end of each JG or BG run segment, which will > help as well (see below). > > We do however need to store the high-frequency coupler history data > locally, as it is used to drive the "next" JG segment. This dataset is > quite large as several (2D) fields are written a few times per day > over ~30 years. The good news is that we can move it from /glade to > some other temporary (or indeed permanent) storage space when starting > the next BG segment. The bad news is that high-frequency output is > quite noisy and therefore is not compressing particularly well. > > Jeremy, do you have a sense for how large the coupler history data is > from each BG segment? > > > Some work has been devoted to reduce the output size of the monthly > history output in the last few months. These numbers are based on the > standard output from sim #288: > > *CAM:* > 947MB per month -- 30 years is ~340 GB > 2D: 242 variables > 3D: 132 variables > > *POP:* > 1.2 GB per month -- 30 years is ~432 GB; 150 years is ~2.16 TB > 2D: 147 variables > 3D: 58 variables > > *CLM:* > 282M per month -- 30 years is ~101 GB; 150 years is ~508 GB > 2D: 422 variables > 3D: 35 variables > > *ICE:* > 76 MB per month -- 30 years is ~27 GB; 150 years is ~137 GB > 2D: 104 variables > 3D: 12 variables > > *GLC:* > 90MB per year -- 30 years is ~2.7 GB; 150 years is ~13.5 GB > > *ROF:* > 9.9MB per month -- 30 years is ~3.5 GB; 150 years is ~18 GB > > *Restart:* > 13 GB per write > > One BG run (30 years of all components + one set of restart files): > ~920 GB > > One JG run (150 years of all components but the atmosphere + one set > of restart files): > ~2.85 TB > > 8 x BG (30 years + 1 x rest) + 8 x JG (150 years + 1 x rest) = ~30.1 TB > > Note that these numbers show the uncompressed size of the standard > output. There are a lot of fields that we don't need in order to > making sure that the model climate is behaving well. Big savings can > be made by reducing the number of vertically resolved fields and only > saving variables that we know that we want to look at and/or are used > by the diagnostic packages. > > I think we can easily reduce the output size by a factor of 10, if not > more. I have started a shared document (you should be able to edit) > listing fields we want to save. I will work on creating a more > complete list when Cheyenne is back online. > > https://docs.google.com/document/d/10Zi7fDbaO06lkwd6N8mM1RtY1E79Y3mVbRxpuv3nc14/edit?usp=sharing > > Best, > Marcus > > On Fri, Mar 9, 2018 at 2:57 PM, Bill Sacks > wrote: > > Hi Laura, > > I'm having trouble reconciling all the numbers here, so still feel > I'm not totally understanding this. 
So I'll reply a bit more > generally rather than trying to address specific numbers: > > In general, we're being asked by CISL (the computing group) to > think carefully about what we really need to store long-term. Disk > space (and HPSS storage) is at a higher premium than processor > time these days. So along those lines, I wonder if it would be > possible to reduce the long-term storage needs by reducing the > number of variables output by various components (especially > multi-level variables) and/or output frequency ? e.g., outputting > most variables as annual rather than monthly averages for much of > the spin-up period? I believe the land group uses one or both of > these strategies when doing its land-only spinups. > > It would also be worth writing to Gary Strand > . He's the person here who has the best > sense of the storage landscape, and could perhaps give some > suggestions for how to proceed. > > Bill S > > > On 3/9/18, 9:16 AM, Jeremy Fyke wrote: >> Hi all >> >> In my experience it?s best to keep at least 10TB of disc >> available for JG/BG. This covers retention of all run files for >> a few iterations. Most necessary to retain are the frequent >> coupler history files from the BG to drive the next JG, and the >> set of restart files needed for the next iteration JG or BG). >> But for analysis it?s nice to have a few iterations of history >> files available on demand. >> >> I?m sure one could more carefully parse things to reduce the >> files needed on disk. In my experience though I just ended up >> accidentally archiving needed files when I tried to pick and >> choose. So Id just recommend keeping the last few iterations on >> disk in full (as well as archiving things promptly). >> >> Jer >> >> >> On Fri, Mar 9, 2018 at 6:08 AM Laura Muntjewerf - CITG >> > wrote: >> >> Hi Bill, >> >> Thanks for your reply. >> >> Firstly let me say that, as far as I?m aware, Marcus will be >> running. I am making numbers, just for practical reasons >> because I was looking into it before. >> >> Regarding your question on this 1.5 TB that needs to be kept >> from a BG: this is the restart files but also 30 years of >> coupler files (ha2x3h - ha2x1h - ha2x1h - ha2x1d). I am >> probably overestimating because I don?t know the ha2x1d. >> >> On your point 1): yes, for effectively carrying out the >> spin-up a temporary doubling of the scratch space should >> suffice. This will require some time/coordination in moving >> of output in-between jobs, but that should be fine. >> >> On point 2): I would like to have all of the BG-JG long-term >> stored. I estimate this to be ~75 TB. Finding a long-term >> storage space indeed would accommodate that. >> >> Laura >> >> >>> On 9 Mar 2018, at 14:14, Bill Sacks >> > wrote: >>> >>> Hi Laura, >>> >>> So, if I'm understanding this correctly: roughly speaking, >>> you'll need a peak short-term storage space of close to 10 >>> TB, and then longer-term storage of a few TB. Does that >>> sound about right? (I'm a little confused by the value of >>> 1.5 TB that you give for a restart set: that sounds awfully >>> high for one set of restarts: I'd expect something more like >>> 10s of GBs at most. But I'm not sure that the final numbers >>> change based on revising that downwards.) >>> >>> So do you feel this can be accommodated by (1) temporarily >>> doubling your scratch space, and then (2) finding a >>> long-term storage space for a few TBs? >>> >>> Please let me know if I'm misunderstanding / misinterpreting >>> this. 
>>> >>> Thanks, >>> Bill S >>> >>> On 3/9/18, 5:32 AM, Laura Muntjewerf - CITG wrote: >>>> Hi Bill, >>>> >>>> Those are good questions. I suppose it involves >>>> decision-making. >>>> >>>> 1a). Inside one BG-JG iteration, we need to keep the >>>> coupler and restart files from the BG. One set is ~1.5 TB. >>>> Pragmatically in case rerunning is necessary or we hit some >>>> other problem, I think it?s good to keep around in scratch >>>> the last set of BG restart and coupler files that made a >>>> successful JG. >>>> So minimum peak scratch requirement during the spin-up, a >>>> JG[n] is about to finish: 1.5 TB BG[n]_forcing + 5.2 TB >>>> JG[n] + 1.5 TB BG[n-1]_forcing (in case of problems) = 8.2 TB >>>> >>>> 1b). After completion of one BG-JG iteration for short-term >>>> storage, I suppose you are referring to the analysis that >>>> needs to be done on the output? There are a number of >>>> variables that are good to keep the complete timeseries of, >>>> but mostly we are interested in the end state. I don?t know >>>> how much exactly, but to give a rough estimate; I expect it >>>> in the order of some 100s GB per simulation. >>>> >>>> 2). For the longer term, I would like to have it integrally >>>> stored on HPSS at least for a few years. >>>> >>>> >>>> Laura >>>> >>>> >>>>> On 8 Mar 2018, at 16:46, Bill Sacks >>>> > wrote: >>>>> >>>>> Hi Laura, >>>>> >>>>> Thanks very much for putting this together! >>>>> >>>>> I don't think /glade/p/cesm/liwg is a viable option right >>>>> now: this falls under this nearly-full quota: >>>>> >>>>> /glade/p/cesm 198.48 TB 200.22 TB 99.13 % >>>>> >>>>> In order to determine the best space(s) for this, it would >>>>> help to know: >>>>> >>>>> (1) How much of the data from one BG-JG iteration do you >>>>> need to save once that iteration is complete? Is it really >>>>> necessary to keep all of these data around, or can you >>>>> delete most of it and (for example) just keep a set of >>>>> restart files around to facilitate backing up or rerunning >>>>> segments if you need to? >>>>> >>>>> (2) How much of these data need to be kept >>>>> medium-long-term ? e.g., for a year, for a few years, or >>>>> longer? >>>>> >>>>> About a year ago, CISL announced a new plan for data >>>>> storage that I thought was supposed to give us all more >>>>> disk space, but I haven't heard what (if anything) came of >>>>> that. We can look into that if that would help. But first >>>>> it would help to know how much of these data really need >>>>> to be kept and for what length of time. >>>>> >>>>> Thanks, >>>>> Bill S >>>>> >>>>> On 3/8/18, 8:10 AM, Laura Muntjewerf - CITG wrote: >>>>>> >>>>>> Hi all, >>>>>> >>>>>> For the BG-JG spin-up, I put together a document on data >>>>>> space we require [estimate], and ways to facilitate the >>>>>> archiving. >>>>>> Please find attached. Feel welcome to comment. >>>>>> >>>>>> https://docs.google.com/document/d/1Zch9-xGPwJ9sHwYBz6n-sBhsJHUptrzdPsHF1Nu6eIU/edit?usp=sharing >>>>>> >>>>>> >>>>>> Now I come to think of it, maybe the /glade/p/cesm/liwg >>>>>> is an option? 
>>>>>> >>>>>> Laura >>>>>> _______________________________________________ >>>>>> Liwg-core mailing list >>>>>> Liwg-core at cgd.ucar.edu >>>>>> http://mailman.cgd.ucar.edu/mailman/listinfo/liwg-core >>>>> >>>> >>> >> > > > _______________________________________________ > Liwg-core mailing list > Liwg-core at cgd.ucar.edu > http://mailman.cgd.ucar.edu/mailman/listinfo/liwg-core > > > > > > -- > Marcus L?fverstr?m (PhD) > Post-doctoral researcher > National Center for Atmospheric Research > 1850 Table Mesa Dr. > 80305 Boulder, CO, USA > > https://sites.google.com/site/lofverstrom/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From garmeson at gmail.com Tue Mar 13 08:28:47 2018 From: garmeson at gmail.com (Jeremy Fyke) Date: Tue, 13 Mar 2018 14:28:47 +0000 Subject: [Liwg-core] BG-JG data space requirements In-Reply-To: <5AA7C94A.4070409@ucar.edu> References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu> <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl> <5AA30342.3010408@ucar.edu> <5AA7C94A.4070409@ucar.edu> Message-ID: Hey, I think those estimates for JG/BG storage costs are roughly 10x what I experienced with out of box settings. Suspect that 288 is using enhanced high frequency output to diagnose bugs, etc? And no I didn't ever assess the size of the coupler files. Also, it is useful (but not critical) in practice to keep interim restart files on disk to rewind simulations as needed at short notice. Jer On Tue, Mar 13, 2018 at 5:51 AM Bill Sacks wrote: > Thanks a lot for this, Marcus. > > > Bill > > > On 3/12/18, 5:48 PM, Marcus Lofverstrom wrote: > > Hi Laura + Bill + Jeremy, > > Sorry for coming in late to this discussion, I was out of the office last > week. > > We can easily reduce the output size of the monthly history files, as only > a small number of variables are needed to ensure that the model climate is > behaving as it should. Similarly, we only need to save the restart files > from the end of each JG or BG run segment, which will help as well (see > below). > > We do however need to store the high-frequency coupler history data > locally, as it is used to drive the "next" JG segment. This dataset is > quite large as several (2D) fields are written a few times per day over ~30 > years. The good news is that we can move it from /glade to some other > temporary (or indeed permanent) storage space when starting the next BG > segment. The bad news is that high-frequency output is quite noisy and > therefore is not compressing particularly well. > > Jeremy, do you have a sense for how large the coupler history data is from > each BG segment? > > > Some work has been devoted to reduce the output size of the monthly > history output in the last few months. 
These numbers are based on the > standard output from sim #288: > > *CAM:* > 947MB per month -- 30 years is ~340 GB > 2D: 242 variables > 3D: 132 variables > > *POP:* > 1.2 GB per month -- 30 years is ~432 GB; 150 years is ~2.16 TB > 2D: 147 variables > 3D: 58 variables > > *CLM:* > 282M per month -- 30 years is ~101 GB; 150 years is ~508 GB > 2D: 422 variables > 3D: 35 variables > > *ICE:* > 76 MB per month -- 30 years is ~27 GB; 150 years is ~137 GB > 2D: 104 variables > 3D: 12 variables > > *GLC:* > 90MB per year -- 30 years is ~2.7 GB; 150 years is ~13.5 GB > > *ROF:* > 9.9MB per month -- 30 years is ~3.5 GB; 150 years is ~18 GB > > *Restart:* > 13 GB per write > > One BG run (30 years of all components + one set of restart files): > ~920 GB > > One JG run (150 years of all components but the atmosphere + one set of > restart files): > ~2.85 TB > > 8 x BG (30 years + 1 x rest) + 8 x JG (150 years + 1 x rest) = ~30.1 TB > > Note that these numbers show the uncompressed size of the standard > output. There are a lot of fields that we don't need in order to making > sure that the model climate is behaving well. Big savings can be made by > reducing the number of vertically resolved fields and only saving variables > that we know that we want to look at and/or are used by the diagnostic > packages. > > I think we can easily reduce the output size by a factor of 10, if not > more. I have started a shared document (you should be able to edit) > listing fields we want to save. I will work on creating a more complete > list when Cheyenne is back online. > > > https://docs.google.com/document/d/10Zi7fDbaO06lkwd6N8mM1RtY1E79Y3mVbRxpuv3nc14/edit?usp=sharing > > Best, > Marcus > > On Fri, Mar 9, 2018 at 2:57 PM, Bill Sacks wrote: > >> Hi Laura, >> >> I'm having trouble reconciling all the numbers here, so still feel I'm >> not totally understanding this. So I'll reply a bit more generally rather >> than trying to address specific numbers: >> >> In general, we're being asked by CISL (the computing group) to think >> carefully about what we really need to store long-term. Disk space (and >> HPSS storage) is at a higher premium than processor time these days. So >> along those lines, I wonder if it would be possible to reduce the long-term >> storage needs by reducing the number of variables output by various >> components (especially multi-level variables) and/or output frequency ? >> e.g., outputting most variables as annual rather than monthly averages for >> much of the spin-up period? I believe the land group uses one or both of >> these strategies when doing its land-only spinups. >> >> It would also be worth writing to Gary Strand >> . He's the person here who has the best sense of the >> storage landscape, and could perhaps give some suggestions for how to >> proceed. >> >> Bill S >> >> >> On 3/9/18, 9:16 AM, Jeremy Fyke wrote: >> >> Hi all >> >> In my experience it?s best to keep at least 10TB of disc available for >> JG/BG. This covers retention of all run files for a few iterations. Most >> necessary to retain are the frequent coupler history files from the BG to >> drive the next JG, and the set of restart files needed for the next >> iteration JG or BG). But for analysis it?s nice to have a few iterations >> of history files available on demand. >> >> I?m sure one could more carefully parse things to reduce the files needed >> on disk. In my experience though I just ended up accidentally archiving >> needed files when I tried to pick and choose. 
So Id just recommend keeping >> the last few iterations on disk in full (as well as archiving things >> promptly). >> >> Jer >> >> >> On Fri, Mar 9, 2018 at 6:08 AM Laura Muntjewerf - CITG < >> L.Muntjewerf at tudelft.nl> wrote: >> >>> Hi Bill, >>> >>> Thanks for your reply. >>> >>> Firstly let me say that, as far as I?m aware, Marcus will be running. I >>> am making numbers, just for practical reasons because I was looking into it >>> before. >>> >>> Regarding your question on this 1.5 TB that needs to be kept from a BG: >>> this is the restart files but also 30 years of coupler files (ha2x3h - >>> ha2x1h - ha2x1h - ha2x1d). I am probably overestimating because I don?t >>> know the ha2x1d. >>> >>> On your point 1): yes, for effectively carrying out the spin-up a >>> temporary doubling of the scratch space should suffice. This will require >>> some time/coordination in moving of output in-between jobs, but that should >>> be fine. >>> >>> On point 2): I would like to have all of the BG-JG long-term stored. I >>> estimate this to be ~75 TB. Finding a long-term storage space indeed would >>> accommodate that. >>> >>> Laura >>> >>> >>> On 9 Mar 2018, at 14:14, Bill Sacks wrote: >>> >>> Hi Laura, >>> >>> So, if I'm understanding this correctly: roughly speaking, you'll need a >>> peak short-term storage space of close to 10 TB, and then longer-term >>> storage of a few TB. Does that sound about right? (I'm a little confused by >>> the value of 1.5 TB that you give for a restart set: that sounds awfully >>> high for one set of restarts: I'd expect something more like 10s of GBs at >>> most. But I'm not sure that the final numbers change based on revising that >>> downwards.) >>> >>> So do you feel this can be accommodated by (1) temporarily doubling your >>> scratch space, and then (2) finding a long-term storage space for a few TBs? >>> >>> Please let me know if I'm misunderstanding / misinterpreting this. >>> >>> Thanks, >>> Bill S >>> >>> On 3/9/18, 5:32 AM, Laura Muntjewerf - CITG wrote: >>> >>> Hi Bill, >>> >>> Those are good questions. I suppose it involves decision-making. >>> >>> 1a). Inside one BG-JG iteration, we need to keep the coupler and restart >>> files from the BG. One set is ~1.5 TB. Pragmatically in case rerunning is >>> necessary or we hit some other problem, I think it?s good to keep around in >>> scratch the last set of BG restart and coupler files that made a successful >>> JG. >>> So minimum peak scratch requirement during the spin-up, a JG[n] is about >>> to finish: 1.5 TB BG[n]_forcing + 5.2 TB JG[n] + 1.5 TB BG[n-1]_forcing (in >>> case of problems) = 8.2 TB >>> >>> 1b). After completion of one BG-JG iteration for short-term storage, I >>> suppose you are referring to the analysis that needs to be done on the >>> output? There are a number of variables that are good to keep the complete >>> timeseries of, but mostly we are interested in the end state. I don?t know >>> how much exactly, but to give a rough estimate; I expect it in the order of >>> some 100s GB per simulation. >>> >>> 2). For the longer term, I would like to have it integrally stored on >>> HPSS at least for a few years. >>> >>> >>> Laura >>> >>> >>> On 8 Mar 2018, at 16:46, Bill Sacks wrote: >>> >>> Hi Laura, >>> >>> Thanks very much for putting this together! 
>>> >>> I don't think /glade/p/cesm/liwg is a viable option right now: this >>> falls under this nearly-full quota: >>> >>> /glade/p/cesm 198.48 TB 200.22 TB 99.13 % >>> >>> In order to determine the best space(s) for this, it would help to know: >>> >>> (1) How much of the data from one BG-JG iteration do you need to save >>> once that iteration is complete? Is it really necessary to keep all of >>> these data around, or can you delete most of it and (for example) just keep >>> a set of restart files around to facilitate backing up or rerunning >>> segments if you need to? >>> >>> (2) How much of these data need to be kept medium-long-term ? e.g., for >>> a year, for a few years, or longer? >>> >>> About a year ago, CISL announced a new plan for data storage that I >>> thought was supposed to give us all more disk space, but I haven't heard >>> what (if anything) came of that. We can look into that if that would help. >>> But first it would help to know how much of these data really need to be >>> kept and for what length of time. >>> >>> Thanks, >>> Bill S >>> >>> On 3/8/18, 8:10 AM, Laura Muntjewerf - CITG wrote: >>> >>> >>> Hi all, >>> >>> For the BG-JG spin-up, I put together a document on data space we >>> require [estimate], and ways to facilitate the archiving. >>> Please find attached. Feel welcome to comment. >>> >>> >>> https://docs.google.com/document/d/1Zch9-xGPwJ9sHwYBz6n-sBhsJHUptrzdPsHF1Nu6eIU/edit?usp=sharing >>> >>> Now I come to think of it, maybe the /glade/p/cesm/liwg is an option? >>> >>> Laura >>> >>> _______________________________________________ >>> Liwg-core mailing listLiwg-core at cgd.ucar.eduhttp://mailman.cgd.ucar.edu/mailman/listinfo/liwg-core >>> >>> >>> >>> >>> >>> >> >> _______________________________________________ >> Liwg-core mailing list >> Liwg-core at cgd.ucar.edu >> http://mailman.cgd.ucar.edu/mailman/listinfo/liwg-core >> >> > > > -- > Marcus L?fverstr?m (PhD) > Post-doctoral researcher > National Center for Atmospheric Research > 1850 Table Mesa Dr. > > 80305 Boulder, CO, USA > > > https://sites.google.com/site/lofverstrom/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From L.Muntjewerf at tudelft.nl Tue Mar 13 09:53:00 2018 From: L.Muntjewerf at tudelft.nl (Laura Muntjewerf - CITG) Date: Tue, 13 Mar 2018 15:53:00 +0000 Subject: [Liwg-core] BG-JG data space requirements In-Reply-To: References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu> <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl> <5AA30342.3010408@ucar.edu> <5AA7C94A.4070409@ucar.edu> Message-ID: <578D59F0-1320-4C39-9FC2-918164BD39EC@tudelft.nl> Hi all, Thanks Marcus, for the numbers you made. That is quite a reduction in output size from #288. For the size of the coupler files, that should be 0.75 TB. Based on a 30-year B-run code #260 [the sum of ha2x3h - ha2x1h - ha2x1h]. Earlier I said 1.5 TB but I double-counted (I merged them to use as forcing files, and took the size of the entire folder; I didn?t delete the original files). Sorry for the confusion. Bill, yes, you are right that it?s not sensible to want to keep the entire BG-JG spin-up in long-term storage. It is, I feel precautious because I don?t have much sense on what is important to save and what not - but that cannot be an argument. On this matter, I am happy there is so much input in this email thread. 
Laura On 13 Mar 2018, at 15:28, Jeremy Fyke > wrote: Hey, I think those estimates for JG/BG storage costs are roughly 10x what I experienced with out of box settings. Suspect that 288 is using enhanced high frequency output to diagnose bugs, etc? And no I didn't ever assess the size of the coupler files. Also, it is useful (but not critical) in practice to keep interim restart files on disk to rewind simulations as needed at short notice. Jer On Tue, Mar 13, 2018 at 5:51 AM Bill Sacks > wrote: Thanks a lot for this, Marcus. Bill On 3/12/18, 5:48 PM, Marcus Lofverstrom wrote: Hi Laura + Bill + Jeremy, Sorry for coming in late to this discussion, I was out of the office last week. We can easily reduce the output size of the monthly history files, as only a small number of variables are needed to ensure that the model climate is behaving as it should. Similarly, we only need to save the restart files from the end of each JG or BG run segment, which will help as well (see below). We do however need to store the high-frequency coupler history data locally, as it is used to drive the "next" JG segment. This dataset is quite large as several (2D) fields are written a few times per day over ~30 years. The good news is that we can move it from /glade to some other temporary (or indeed permanent) storage space when starting the next BG segment. The bad news is that high-frequency output is quite noisy and therefore is not compressing particularly well. Jeremy, do you have a sense for how large the coupler history data is from each BG segment? Some work has been devoted to reduce the output size of the monthly history output in the last few months. These numbers are based on the standard output from sim #288: CAM: 947MB per month -- 30 years is ~340 GB 2D: 242 variables 3D: 132 variables POP: 1.2 GB per month -- 30 years is ~432 GB; 150 years is ~2.16 TB 2D: 147 variables 3D: 58 variables CLM: 282M per month -- 30 years is ~101 GB; 150 years is ~508 GB 2D: 422 variables 3D: 35 variables ICE: 76 MB per month -- 30 years is ~27 GB; 150 years is ~137 GB 2D: 104 variables 3D: 12 variables GLC: 90MB per year -- 30 years is ~2.7 GB; 150 years is ~13.5 GB ROF: 9.9MB per month -- 30 years is ~3.5 GB; 150 years is ~18 GB Restart: 13 GB per write One BG run (30 years of all components + one set of restart files): ~920 GB One JG run (150 years of all components but the atmosphere + one set of restart files): ~2.85 TB 8 x BG (30 years + 1 x rest) + 8 x JG (150 years + 1 x rest) = ~30.1 TB Note that these numbers show the uncompressed size of the standard output. There are a lot of fields that we don't need in order to making sure that the model climate is behaving well. Big savings can be made by reducing the number of vertically resolved fields and only saving variables that we know that we want to look at and/or are used by the diagnostic packages. I think we can easily reduce the output size by a factor of 10, if not more. I have started a shared document (you should be able to edit) listing fields we want to save. I will work on creating a more complete list when Cheyenne is back online. https://docs.google.com/document/d/10Zi7fDbaO06lkwd6N8mM1RtY1E79Y3mVbRxpuv3nc14/edit?usp=sharing Best, Marcus On Fri, Mar 9, 2018 at 2:57 PM, Bill Sacks > wrote: Hi Laura, I'm having trouble reconciling all the numbers here, so still feel I'm not totally understanding this. 
So I'll reply a bit more generally rather than trying to address specific numbers:

In general, we're being asked by CISL (the computing group) to think carefully about what we really need to store long-term. Disk space (and HPSS storage) is at a higher premium than processor time these days. So along those lines, I wonder if it would be possible to reduce the long-term storage needs by reducing the number of variables output by various components (especially multi-level variables) and/or the output frequency, e.g. outputting most variables as annual rather than monthly averages for much of the spin-up period? I believe the land group uses one or both of these strategies when doing its land-only spinups.

It would also be worth writing to Gary Strand. He's the person here who has the best sense of the storage landscape, and could perhaps give some suggestions for how to proceed.

Bill S

On 3/9/18, 9:16 AM, Jeremy Fyke wrote:

Hi all,

In my experience it's best to keep at least 10 TB of disk available for JG/BG. This covers retention of all run files for a few iterations. Most necessary to retain are the frequent coupler history files from the BG to drive the next JG, and the set of restart files needed for the next iteration (JG or BG). But for analysis it's nice to have a few iterations of history files available on demand.

I'm sure one could more carefully parse things to reduce the files needed on disk. In my experience, though, I just ended up accidentally archiving needed files when I tried to pick and choose. So I'd just recommend keeping the last few iterations on disk in full (as well as archiving things promptly).

Jer

On Fri, Mar 9, 2018 at 6:08 AM Laura Muntjewerf - CITG wrote:

Hi Bill,

Thanks for your reply.

Firstly, let me say that, as far as I'm aware, Marcus will be running. I am putting numbers together just for practical reasons, because I was already looking into this.

Regarding your question on the 1.5 TB that needs to be kept from a BG: this is the restart files, but also 30 years of coupler files (ha2x3h - ha2x1h - ha2x1h - ha2x1d). I am probably overestimating because I don't know the size of the ha2x1d files.

On your point 1): yes, for effectively carrying out the spin-up a temporary doubling of the scratch space should suffice. This will require some time/coordination in moving output between jobs, but that should be fine.

On point 2): I would like to have all of the BG-JG spin-up stored long-term. I estimate this to be ~75 TB. Finding a long-term storage space would indeed accommodate that.

Laura

On 9 Mar 2018, at 14:14, Bill Sacks wrote:

Hi Laura,

So, if I'm understanding this correctly: roughly speaking, you'll need a peak short-term storage space of close to 10 TB, and then longer-term storage of a few TB. Does that sound about right? (I'm a little confused by the value of 1.5 TB that you give for a restart set: that sounds awfully high for one set of restarts; I'd expect something more like 10s of GBs at most. But I'm not sure that the final numbers change based on revising that downwards.)

So do you feel this can be accommodated by (1) temporarily doubling your scratch space, and then (2) finding a long-term storage space for a few TBs?

Please let me know if I'm misunderstanding / misinterpreting this.

Thanks,
Bill S

On 3/9/18, 5:32 AM, Laura Muntjewerf - CITG wrote:

Hi Bill,

Those are good questions. I suppose it involves decision-making.

1a). Inside one BG-JG iteration, we need to keep the coupler and restart files from the BG. One set is ~1.5 TB.
Pragmatically, in case rerunning is necessary or we hit some other problem, I think it's good to keep around in scratch the last set of BG restart and coupler files that made a successful JG. So the minimum peak scratch requirement during the spin-up, when a JG[n] is about to finish, is: 1.5 TB BG[n]_forcing + 5.2 TB JG[n] + 1.5 TB BG[n-1]_forcing (in case of problems) = 8.2 TB.

1b). After completion of one BG-JG iteration, for short-term storage I suppose you are referring to the analysis that needs to be done on the output? There are a number of variables that are good to keep the complete timeseries of, but mostly we are interested in the end state. I don't know exactly how much, but as a rough estimate I expect it to be on the order of some 100s of GB per simulation.

2). For the longer term, I would like to have it integrally stored on HPSS, at least for a few years.

Laura
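As a cross-check on the totals quoted in Marcus's message above, the arithmetic can be written out as a short illustrative Python sketch. The per-month and per-write sizes are the ones listed for sim #288; decimal units (1 GB = 1000 MB) and 12 monthly history files per model year are assumed.

# Illustrative recomputation of the uncompressed BG/JG output volumes quoted
# above for sim #288. Decimal units are assumed: 1 GB = 1000 MB, 1 TB = 1000 GB.

MB_PER_MONTH = {   # monthly history output, MB per month
    "CAM": 947.0,
    "POP": 1200.0,
    "CLM": 282.0,
    "ICE": 76.0,
    "ROF": 9.9,
}
GLC_MB_PER_YEAR = 90.0   # GLC writes yearly history
RESTART_GB = 13.0        # one set of restart files per segment

def history_gb(years, include_atm=True):
    """Uncompressed history volume in GB for a run of `years` model years."""
    mb = sum(size * 12 * years for comp, size in MB_PER_MONTH.items()
             if include_atm or comp != "CAM")
    mb += GLC_MB_PER_YEAR * years
    return mb / 1000.0

bg_gb = history_gb(30) + RESTART_GB                       # ~920 GB per BG segment
jg_gb = history_gb(150, include_atm=False) + RESTART_GB   # ~2.85 TB per JG segment
spinup_tb = 8 * (bg_gb + jg_gb) / 1000.0                  # ~30 TB for 8 x (BG + JG)

print(f"one BG segment: {bg_gb:.0f} GB")
print(f"one JG segment: {jg_gb:.0f} GB")
print(f"8 BG + 8 JG   : {spinup_tb:.1f} TB")

Note that Laura's peak-scratch figure further up the thread (1.5 TB BG[n] forcing + 5.2 TB JG[n] + 1.5 TB BG[n-1] forcing = 8.2 TB) is a separate, earlier estimate and is not derived from these per-component numbers.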
From sacks at ucar.edu Tue Mar 13 10:06:03 2018 From: sacks at ucar.edu (Bill Sacks) Date: Tue, 13 Mar 2018 10:06:03 -0600 Subject: [Liwg-core] BG-JG data space requirements In-Reply-To: <578D59F0-1320-4C39-9FC2-918164BD39EC@tudelft.nl> References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu> <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl> <5AA30342.3010408@ucar.edu> <5AA7C94A.4070409@ucar.edu> <578D59F0-1320-4C39-9FC2-918164BD39EC@tudelft.nl> Message-ID: <5AA7F6EB.1010403@ucar.edu>

Hi all,

It feels like there is some convergence on more accurate requirements, though I'm not following this closely enough to be able to come up with those numbers myself. Once someone puts together a revised estimate, I'd suggest running it by Gary Strand.

Bill

From L.Muntjewerf at tudelft.nl Tue Mar 13 10:22:45 2018 From: L.Muntjewerf at tudelft.nl (Laura Muntjewerf - CITG) Date: Tue, 13 Mar 2018 16:22:45 +0000 Subject: [Liwg-core] BG-JG data space requirements In-Reply-To: <5AA7F6EB.1010403@ucar.edu> References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu> <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl> <5AA30342.3010408@ucar.edu> <5AA7C94A.4070409@ucar.edu> <578D59F0-1320-4C39-9FC2-918164BD39EC@tudelft.nl> <5AA7F6EB.1010403@ucar.edu> Message-ID:

Hi all,

Thanks Bill, that seems pragmatic.
@Marcus, size of individual daily coupler files:
- 36 MB for ha2x3h
- 21 MB for ha2x1hi
- 11 MB for ha2x1h

Laura
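As a quick sanity check on the ~0.75 TB figure for 30 years of BG coupler forcing mentioned earlier in the thread, here is a minimal sketch using the per-file sizes above. It assumes one file per stream per simulated day and 365-day model years; both assumptions should be checked against the actual run setup.

# Rough total for 30 years of daily coupler history files, using the per-file
# sizes quoted above. One file per stream per day and 365-day years are assumed.

daily_file_mb = {
    "ha2x3h": 36.0,
    "ha2x1hi": 21.0,
    "ha2x1h": 11.0,
}

years = 30
total_mb = sum(daily_file_mb.values()) * 365 * years
print(f"~{total_mb / 1e6:.2f} TB")   # ~0.74 TB, consistent with the ~0.75 TB estimate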
From M.Vizcaino at tudelft.nl Wed Mar 14 08:30:51 2018 From: M.Vizcaino at tudelft.nl (Miren Vizcaino) Date: Wed, 14 Mar 2018 14:30:51 +0000 Subject: [Liwg-core] [Cesm2control] 20th century from 280 In-Reply-To: References: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu> <1A4094BB-3DAE-479D-B2B0-1EF6D80664DA@uu.nl> Message-ID: <9F98BBFD-826D-4C24-B626-87E08ED7F387@tudelft.nl>

Hi,

Raymond checked #286 and the snow cover is growing both in the Canadian Archipelago and Greenland:

http://webext.cgd.ucar.edu/B1850/b.e20.B1850.f09_g17.pi_control.all.286/lnd/b.e20.B1850.f09_g17.pi_control.all.286.11_20-b.e20.B1850.f09_g17.pi_control.all.280.11_20/set6/set6_landf_Greenland.png

which in part relates to the fact that the snow has been reset in this run, while it was not in #279 (see below). The run stops at year 20; does anybody know why?

On a related topic to the Canadian Archipelago, Raymond did a sensitivity run on the climate impact of the repartition, and found that it increases the permanent snow cover area in the Canadian Archipelago and NW Canada (feedback: more snow, cooling, more snow), cools a large North American area and Greenland by 1 K in the annual mean and 4 K in DJF, and results in changes in the ITCZ (see plots below).

If the snow cover over the Canadian Archipelago is such an issue at the moment, there are two things that can be tried:
- turning off the repartition
- lowering the snow cap for non-glaciated regions

At Delft we have a problem with the permanent snow cover over the Canadian Archipelago, since it results in advected cold air over Greenland.

Cheers, Miren

On Mar 12, 2018, at 12:35 PM, Miren Vizcaino wrote:

Hi All,

Raymond and Laura checked 279 vs 266 and found that the snow cover is going strongly down in the Canadian Archipelago, but not in Greenland.

From Raymond:

http://webext.cgd.ucar.edu/B1850/b.e20.B1850.f09_g17.pi_control.all.279/lnd/b.e20.B1850.f09_g17.pi_control.all.279.21_40-b.e20.B1850.f09_g17.pi_control.all.266.21_40/setsIndex.html

It does not look like the Greenland snow is much affected by this bug fix, as it produces similar amounts of runoff in both runs. However, it seems to have a high impact on the snow cover in the Canadian Arctic. The snow w.e. reduces from ~7200 mm to ~4700 mm over 40 years under preindustrial conditions. Temperatures increase over the Canadian Archipelago, and also over Greenland (because of heat advection from this area).
It is likely that if the model gets to simulate a seasonal snow cover over the Canadian Archipelago, we see much improved heat advection over Greenland.

We are checking #286 at the moment.

"I just checked H2OSNO in #286 and it has grown > 1 m in August, after 20 years of runtime."

Leo, over which area is this?

Thanks, Miren

On Mar 8, 2018, at 11:10 AM, Kampenhout, L. van (Leo) wrote:

I just checked H2OSNO in #286 and it has grown > 1 m in August, after 20 years of runtime.

Leo

-------------- next part -------------- A non-text attachment was scrubbed... Name: set2_JJA_H2OSNO.pdf Type: application/pdf Size: 746982 bytes Desc: set2_JJA_H2OSNO.pdf URL:

From lipscomb at ucar.edu Wed Mar 14 08:47:10 2018 From: lipscomb at ucar.edu (William Lipscomb) Date: Wed, 14 Mar 2018 08:47:10 -0600 Subject: [Liwg-core] [Cesm2control] 20th century from 280 In-Reply-To: <9F98BBFD-826D-4C24-B626-87E08ED7F387@tudelft.nl> References: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu> <1A4094BB-3DAE-479D-B2B0-1EF6D80664DA@uu.nl> <9F98BBFD-826D-4C24-B626-87E08ED7F387@tudelft.nl> Message-ID:

Hi Miren,

Thanks for the analysis. I was wondering: Do we have any analyses of climate runs where the snow cap is lower for non-glaciated regions? Are there physical or other reasons why this would not be a good idea? (I'm thinking only of runs without an evolving or interactive ice sheet.)

Bill L.
--
William Lipscomb
Climate & Global Dynamics
National Center for Atmospheric Research
1850 Table Mesa Drive
Boulder, CO 80305
(505) 699-8016

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From sacks at ucar.edu Wed Mar 14 09:39:30 2018 From: sacks at ucar.edu (Bill Sacks) Date: Wed, 14 Mar 2018 09:39:30 -0600 Subject: [Liwg-core] Next LIWG-Core meeting Message-ID: <5AA94232.2030005@ucar.edu> Hi all, For those who missed today's call, there are some notes here: https://docs.google.com/document/d/1jJ_-6PNJ3hkB2xILmfaTpDxt43PhfZIpp7CvJjIqTbk/edit There is an Arctic meeting at NCAR during our normal April call time (April 11). So our next scheduled telecon will be Wednesday, May 9. If Bill L, Jan or others feel there's a need for a call before then, we'll schedule one as needed. Bill S From sacks at ucar.edu Wed Mar 14 10:09:07 2018 From: sacks at ucar.edu (Bill Sacks) Date: Wed, 14 Mar 2018 10:09:07 -0600 Subject: [Liwg-core] [Cesm2control] 20th century from 280 In-Reply-To: References: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu> <1A4094BB-3DAE-479D-B2B0-1EF6D80664DA@uu.nl> <9F98BBFD-826D-4C24-B626-87E08ED7F387@tudelft.nl> Message-ID: <5AA94923.5010903@ucar.edu> In response to Bill L's question: > Are there physical or other reasons why this would not be a good > idea? (I'm thinking only of runs without an evolving or interactive > ice sheet.) We've discussed this at least once, and I think a few times over the last few years, and have decided that it's best to keep things consistent. CLM tries to keep physics consistent from one region to another, as much as possible. Similarly, the philosophy (which admittedly is not always followed, but I believe it's still a guiding principle) is that different landunits should have the same physics as much as possible, unless there's a good reason why they cannot / should not. We've needed to introduce some differences between ice sheets and mountain glaciers over the last couple of years, but this has always been done with reluctance. One big problem with making things inconsistent is that you are no longer allowing properties to emerge from the model, but instead are forcing the model to behave the way you want it to. These kinds of differences are especially problematic for members of the community who are less in the know about all of the details of the model. We've seen how much trouble it causes when vegetated landunits behave differently from glacier landunits in some respects (analyses members of this group have done trying to understand behavior of the tundra), and I've been pushing to remove these differences as much as possible. Bill S On 3/14/18, 8:47 AM, William Lipscomb wrote: > Hi Miren, > > Thanks for the analysis. I was wondering: Do we have any analyses of > climate runs where the snow cap is lower for non-glaciated regions? > Are there physical or other reasons why this would not be a good > idea? (I'm thinking only of runs without an evolving or interactive > ice sheet.) > > Bill L. > > On Wed, Mar 14, 2018 at 8:30 AM, Miren Vizcaino > wrote: > > Hi, > > Raymond checked #286 and the snow cover is growing both in > Canadian Archipelago and Greenland, > > http://webext.cgd.ucar.edu/B1850/b.e20.B1850.f09_g17.pi_control.all.286/lnd/b.e20.B1850.f09_g17.pi_control.all.286.11_20-b.e20.B1850.f09_g17.pi_control.all.280.11_20/set6/set6_landf_Greenland.png > > > which in part relates to the fact that the snow has been reset in > this run, while it was not in #279 (see below) > > The run stops at year 20, does anybody know why? 
> > On a related topic to the Canadian Archipelago, Raymond did a > sensitivity run on climate impact of the repartition, and found > that it increases the permanent snow cover area in Canadian > Archipelago and NW Canada (feedback more snow, cooling, more > snow), cools a large North American area and Greenland by 1 K in > the annual mean and 4 K in DJF, and results in changes in the ITCZ > (see plots below) > > If the snow cover over the Canadian Archipelago is such an issue > at the moment, there are two things that can be tried: > > - turning off the repartition > > - lowering the snow cap for non-glaciated regions > > At Delft we have a problem with the permanent snow cover over the > Canadian Archipielago, since it results in advected cold air over > Greenland. > > Cheers, Miren > > >> On Mar 12, 2018, at 12:35 PM, Miren Vizcaino >> > wrote: >> >> Hi All, >> >> Raymond and Laura checked 279 vs 266 and found that the snow >> cover is going strongly down in the Canadian Archipelago, but not >> in Greenland >> >> >> From Raymond: >> >> http://webext.cgd.ucar.edu/B1850/b.e20.B1850.f09_g17.pi_control.all.279/lnd/b.e20.B1850.f09_g17.pi_control.all.279.21_40-b.e20.B1850.f09_g17.pi_control.all.266.21_40/setsIndex.html >> ). >> It does not look like the Greenland snow is much affected by this >> bug fix as it produces similar amounts of runoff in both runs. >> >> However, it seems to have a high impact on the snow cover in >> Canadian Arctic. The snow w.e. reduces from ~7200mm to ~4700mm >> over 40 years under preindustrial conditions. >> >> Temperatures increase over the Canadian Archipelago, and also >> over Greenland (because of heat advection from this area). It is >> likely that if the model gets to simulate a seasonal snow cover >> over the Canadian Archipelago, we see much improved heat >> advection over Greenland. >> >> We are checking #286 at the moment. >> >>> I just checked H2OSNO in #286 and it has grown > 1 m in August , >>> after 20 years of runtime. >> >> Leo, over which area is this? >> >> Thanks, Miren >> >> >> >> >>> On Mar 8, 2018, at 11:10 AM, Kampenhout, L. van (Leo) >>> > wrote: >>> >>> I just checked H2OSNO in #286 and it has grown > 1 m in August , >>> after 20 years of runtime. >>> >>> Leo >>> >>> >>> >>>> On 6 Mar 2018, at 22:51, Jan Lenaerts >>>> > >>>> wrote: >>>> >>>> Hi all, >>>> >>>> This is potentially relevant for us as well. It *could be *that >>>> the Greenland tundra problem has been resolved (or will be in a >>>> transient simulation) because of bug found in the energy >>>> conservation over land (as far as I have tracked the recent >>>> meetings). Don?t know the exact details though. >>>> >>>> Cheers, >>>> >>>> Jan >>>> >>>> >>>> >>>> >>>>> Begin forwarded message: >>>>> >>>>> *From: *David Lawrence >>>> > >>>>> *Subject: **Re: [Cesm2control] 20th century from 280* >>>>> *Date: *6 March 2018 at 14:46:55 GMT-7 >>>>> *To: *Cecile Hannay > >>>>> *Cc: *cesm2control >>>> > >>>>> >>>>> I just checked and land is definitely fluxing water out as it >>>>> comes to new equilibrium in all runs post-279. It looks like >>>>> the big part of the flux is done after about 50 years so could >>>>> restart 284/285 with land IC from the longest of the post-279 >>>>> runs. 
From lipscomb at ucar.edu Wed Mar 14 10:24:26 2018
From: lipscomb at ucar.edu (William Lipscomb)
Date: Wed, 14 Mar 2018 10:24:26 -0600
Subject: [Liwg-core] [Cesm2control] 20th century from 280
In-Reply-To: <5AA94923.5010903@ucar.edu>
References: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu> <1A4094BB-3DAE-479D-B2B0-1EF6D80664DA@uu.nl> <9F98BBFD-826D-4C24-B626-87E08ED7F387@tudelft.nl> <5AA94923.5010903@ucar.edu>
Message-ID:

Hi Bill,

Thanks, that makes sense. I think it's good for LIWG users to know that this is a potential tuning knob (in case it would reduce some biases and improve their science), but I agree that it's problematic for non-expert users to have different physics in different landunits.

Bill L.
From M.Vizcaino at tudelft.nl Wed Mar 14 14:50:39 2018
From: M.Vizcaino at tudelft.nl (Miren Vizcaino)
Date: Wed, 14 Mar 2018 20:50:39 +0000
Subject: [Liwg-core] [Cesm2control] 20th century from 280
In-Reply-To:
References: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu> <1A4094BB-3DAE-479D-B2B0-1EF6D80664DA@uu.nl> <9F98BBFD-826D-4C24-B626-87E08ED7F387@tudelft.nl> <5AA94923.5010903@ucar.edu>
Message-ID: <9773F252-BF98-4C82-A829-B39130D1759A@tudelft.nl>

I am surprised that nobody has reacted to the strong model sensitivity to turning off the repartition, which goes in the right direction to remove the biased permanent snow cover in the Canadian Archipelago (page 1 of the PDF).

Miren
From lipscomb at ucar.edu Wed Mar 14 15:16:41 2018
From: lipscomb at ucar.edu (William Lipscomb)
Date: Wed, 14 Mar 2018 15:16:41 -0600
Subject: [Liwg-core] [Cesm2control] 20th century from 280
In-Reply-To: <9773F252-BF98-4C82-A829-B39130D1759A@tudelft.nl>
References: <236095CF-F037-4A28-B000-AE4F89AFFC46@colorado.edu> <1A4094BB-3DAE-479D-B2B0-1EF6D80664DA@uu.nl> <9F98BBFD-826D-4C24-B626-87E08ED7F387@tudelft.nl> <5AA94923.5010903@ucar.edu> <9773F252-BF98-4C82-A829-B39130D1759A@tudelft.nl>
Message-ID:

Hi Miren,

I agree that the sensitivity is striking. I think the issue with turning off the repartition would be similar to the snow depth, in that we'd have different physics for different landunits (and, effectively, for different geographic regions). This could be justified for specific applications but might not be a good idea as a default model feature.

Does CAM give excessive liquid precip in the Canadian Archipelago (prior to conversion)? If so, the model changes might go in the right direction, but not for the right physical reasons.

Bill L.
From L.vanKampenhout at uu.nl Thu Mar 15 15:01:26 2018
From: L.vanKampenhout at uu.nl (Kampenhout, L. van (Leo))
Date: Thu, 15 Mar 2018 21:01:26 +0000
Subject: [Liwg-core] BG-JG data space requirements
In-Reply-To:
References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu> <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl> <5AA30342.3010408@ucar.edu> <5AA7C94A.4070409@ucar.edu> <578D59F0-1320-4C39-9FC2-918164BD39EC@tudelft.nl> <5AA7F6EB.1010403@ucar.edu>
Message-ID: <8ABAC31A-418F-4DE5-BE47-C58441F370E5@uu.nl>

Hi all,

I just wanted to chime in on one particular thing that Jeremy said:

> And no I didn't ever assess the size of the coupler files. Also, it is
> useful (but not critical) in practice to keep interim restart files on
> disk to rewind simulations as needed at short notice.

In my experience, interim restart files get deleted when running st_archive, the short-term archiver. @Bill S, do you know why this is and how to turn that off?

This may not be a problem when your restart frequency equals the stopping frequency.

Leo

On 13 Mar 2018, at 17:22, Laura Muntjewerf - CITG wrote:

Hi all,

Thanks Bill, that seems pragmatic.

@Marcus, size of the individual daily coupler files:
- 36 MB for ha2x3h
- 21 MB for ha2x1hi
- 11 MB for ha2x1h

Laura

On 13 Mar 2018, at 17:06, Bill Sacks wrote:

Hi all,

It feels like there is some convergence on more accurate requirements, though I'm not following this closely enough to be able to come up with those numbers myself. Once someone puts together a revised estimate, I'd suggest running it by Gary Strand.

Bill

On 3/13/18, 9:53 AM, Laura Muntjewerf - CITG wrote:

Hi all,

Thanks, Marcus, for the numbers you made. That is quite a reduction in output size from #288.

For the size of the coupler files, that should be 0.75 TB, based on a 30-year B run, code #260 [the sum of ha2x3h, ha2x1hi and ha2x1h]. Earlier I said 1.5 TB, but I double-counted (I merged them to use as forcing files and took the size of the entire folder; I didn't delete the original files). Sorry for the confusion.

Bill, yes, you are right that it's not sensible to want to keep the entire BG-JG spin-up in long-term storage. I feel precautious because I don't have much sense of what is important to save and what is not - but that cannot be an argument. On this matter, I am happy there is so much input in this email thread.

Laura

On 13 Mar 2018, at 15:28, Jeremy Fyke wrote:

Hey,

I think those estimates for JG/BG storage costs are roughly 10x what I experienced with out-of-box settings. Suspect that 288 is using enhanced high-frequency output to diagnose bugs, etc?
And no, I didn't ever assess the size of the coupler files. Also, it is useful (but not critical) in practice to keep interim restart files on disk, to rewind simulations as needed at short notice.

Jer

On Tue, Mar 13, 2018 at 5:51 AM, Bill Sacks wrote:

Thanks a lot for this, Marcus.

Bill

On 3/12/18, 5:48 PM, Marcus Lofverstrom wrote:

Hi Laura + Bill + Jeremy,

Sorry for coming in late to this discussion, I was out of the office last week.

We can easily reduce the output size of the monthly history files, as only a small number of variables are needed to ensure that the model climate is behaving as it should. Similarly, we only need to save the restart files from the end of each JG or BG run segment, which will help as well (see below).

We do, however, need to store the high-frequency coupler history data locally, as it is used to drive the "next" JG segment. This dataset is quite large, as several (2D) fields are written a few times per day over ~30 years. The good news is that we can move it from /glade to some other temporary (or indeed permanent) storage space when starting the next BG segment. The bad news is that high-frequency output is quite noisy and therefore does not compress particularly well. Jeremy, do you have a sense of how large the coupler history data is from each BG segment?

Some work has been devoted to reducing the size of the monthly history output in the last few months. These numbers are based on the standard output from sim #288:

CAM: 947 MB per month -- 30 years is ~340 GB (2D: 242 variables; 3D: 132 variables)
POP: 1.2 GB per month -- 30 years is ~432 GB; 150 years is ~2.16 TB (2D: 147 variables; 3D: 58 variables)
CLM: 282 MB per month -- 30 years is ~101 GB; 150 years is ~508 GB (2D: 422 variables; 3D: 35 variables)
ICE: 76 MB per month -- 30 years is ~27 GB; 150 years is ~137 GB (2D: 104 variables; 3D: 12 variables)
GLC: 90 MB per year -- 30 years is ~2.7 GB; 150 years is ~13.5 GB
ROF: 9.9 MB per month -- 30 years is ~3.5 GB; 150 years is ~18 GB
Restart: 13 GB per write

One BG run (30 years of all components + one set of restart files): ~920 GB
One JG run (150 years of all components but the atmosphere + one set of restart files): ~2.85 TB
8 x BG (30 years + 1 x restart) + 8 x JG (150 years + 1 x restart) = ~30.1 TB

Note that these numbers show the uncompressed size of the standard output. There are a lot of fields that we don't need in order to make sure that the model climate is behaving well. Big savings can be made by reducing the number of vertically resolved fields and only saving variables that we know we want to look at and/or that are used by the diagnostic packages. I think we can easily reduce the output size by a factor of 10, if not more.

I have started a shared document (you should be able to edit) listing fields we want to save. I will work on creating a more complete list when Cheyenne is back online.

https://docs.google.com/document/d/10Zi7fDbaO06lkwd6N8mM1RtY1E79Y3mVbRxpuv3nc14/edit?usp=sharing

Best,
Marcus

On Fri, Mar 9, 2018 at 2:57 PM, Bill Sacks wrote:

Hi Laura,

I'm having trouble reconciling all the numbers here, so I still feel I'm not totally understanding this. I'll therefore reply a bit more generally rather than trying to address specific numbers:

In general, we're being asked by CISL (the computing group) to think carefully about what we really need to store long-term. Disk space (and HPSS storage) is at a higher premium than processor time these days.
So along those lines, I wonder if it would be possible to reduce the long-term storage needs by reducing the number of variables output by various components (especially multi-level variables) and/or the output frequency, e.g., outputting most variables as annual rather than monthly averages for much of the spin-up period? I believe the land group uses one or both of these strategies when doing its land-only spinups.

It would also be worth writing to Gary Strand. He's the person here who has the best sense of the storage landscape, and could perhaps give some suggestions for how to proceed.

Bill S

On 3/9/18, 9:16 AM, Jeremy Fyke wrote:

Hi all,

In my experience it's best to keep at least 10 TB of disk available for JG/BG. This covers retention of all run files for a few iterations. Most necessary to retain are the frequent coupler history files from the BG to drive the next JG, and the set of restart files needed for the next iteration (JG or BG). But for analysis it's nice to have a few iterations of history files available on demand.

I'm sure one could more carefully parse things to reduce the files needed on disk. In my experience, though, I just ended up accidentally archiving needed files when I tried to pick and choose, so I'd just recommend keeping the last few iterations on disk in full (as well as archiving things promptly).

Jer

On Fri, Mar 9, 2018 at 6:08 AM, Laura Muntjewerf - CITG wrote:

Hi Bill,

Thanks for your reply. Firstly, let me say that, as far as I'm aware, Marcus will be running; I am making numbers just for practical reasons, because I was looking into it before.

Regarding your question on the 1.5 TB that needs to be kept from a BG: this is the restart files, but also 30 years of coupler files (ha2x3h, ha2x1hi, ha2x1h and ha2x1d). I am probably overestimating because I don't know the ha2x1d.

On your point 1): yes, for effectively carrying out the spin-up, a temporary doubling of the scratch space should suffice. This will require some time/coordination in moving output between jobs, but that should be fine.

On point 2): I would like to have all of the BG-JG output stored long-term. I estimate this to be ~75 TB. Finding a long-term storage space would indeed accommodate that.

Laura

On 9 Mar 2018, at 14:14, Bill Sacks wrote:

Hi Laura,

So, if I'm understanding this correctly: roughly speaking, you'll need a peak short-term storage space of close to 10 TB, and then longer-term storage of a few TB. Does that sound about right? (I'm a little confused by the value of 1.5 TB that you give for a restart set: that sounds awfully high for one set of restarts; I'd expect something more like tens of GBs at most. But I'm not sure the final numbers change based on revising that downwards.)

So do you feel this can be accommodated by (1) temporarily doubling your scratch space, and then (2) finding a long-term storage space for a few TBs?

Please let me know if I'm misunderstanding / misinterpreting this.

Thanks, Bill S

On 3/9/18, 5:32 AM, Laura Muntjewerf - CITG wrote:

Hi Bill,

Those are good questions. I suppose it involves decision-making.

1a) Inside one BG-JG iteration, we need to keep the coupler and restart files from the BG. One set is ~1.5 TB. Pragmatically, in case rerunning is necessary or we hit some other problem, I think it's good to keep around in scratch the last set of BG restart and coupler files that made a successful JG.
So the minimum peak scratch requirement during the spin-up, when a JG[n] is about to finish, is: 1.5 TB BG[n]_forcing + 5.2 TB JG[n] + 1.5 TB BG[n-1]_forcing (in case of problems) = 8.2 TB.

1b). After completion of one BG-JG iteration, for short-term storage, I suppose you are referring to the analysis that needs to be done on the output? There are a number of variables that are good to keep the complete timeseries of, but mostly we are interested in the end state. I don't know how much exactly, but to give a rough estimate, I expect it to be on the order of some 100s of GB per simulation.

2). For the longer term, I would like to have it integrally stored on HPSS at least for a few years.

Laura

On 8 Mar 2018, at 16:46, Bill Sacks wrote:

Hi Laura,

Thanks very much for putting this together!

I don't think /glade/p/cesm/liwg is a viable option right now: it falls under a nearly-full quota: /glade/p/cesm is at 198.48 TB of its 200.22 TB quota (99.13% full).

In order to determine the best space(s) for this, it would help to know:

(1) How much of the data from one BG-JG iteration do you need to save once that iteration is complete? Is it really necessary to keep all of these data around, or can you delete most of it and (for example) just keep a set of restart files around to facilitate backing up or rerunning segments if you need to?

(2) How much of these data need to be kept medium- to long-term, e.g., for a year, for a few years, or longer?

About a year ago, CISL announced a new plan for data storage that I thought was supposed to give us all more disk space, but I haven't heard what (if anything) came of that. We can look into that if that would help. But first it would help to know how much of these data really need to be kept and for what length of time.

Thanks,
Bill S

On 3/8/18, 8:10 AM, Laura Muntjewerf - CITG wrote:

Hi all,

For the BG-JG spin-up, I put together a document on the data space we require [estimate] and ways to facilitate the archiving. Please find attached, and feel welcome to comment.

https://docs.google.com/document/d/1Zch9-xGPwJ9sHwYBz6n-sBhsJHUptrzdPsHF1Nu6eIU/edit?usp=sharing

Now that I come to think of it, maybe /glade/p/cesm/liwg is an option?

Laura
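Laura's peak-scratch figure in the 9 March message above is just the sum of the forcing set being consumed, the JG segment under way, and the previous forcing set kept as a fallback. A small sketch that makes the dependence on the forcing-set size explicit (the 1.5 TB and 5.2 TB values are taken from the message; the 0.75 TB case anticipates Laura's later correction further down the thread, and the function name is illustrative only):

    # Peak scratch while JG[n] is finishing: BG[n] forcing + JG[n] output + BG[n-1] forcing.
    def peak_scratch_tb(forcing_set_tb=1.5, jg_run_tb=5.2):
        return 2 * forcing_set_tb + jg_run_tb

    print(peak_scratch_tb())      # 8.2 TB with the original 1.5 TB forcing estimate
    print(peak_scratch_tb(0.75))  # 6.7 TB with a ~0.75 TB forcing set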
From sacks at ucar.edu Thu Mar 15 15:14:51 2018
From: sacks at ucar.edu (Bill Sacks)
Date: Thu, 15 Mar 2018 15:14:51 -0600
Subject: [Liwg-core] BG-JG data space requirements
In-Reply-To: <8ABAC31A-418F-4DE5-BE47-C58441F370E5@uu.nl>
References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu> <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl> <5AA30342.3010408@ucar.edu> <5AA7C94A.4070409@ucar.edu> <578D59F0-1320-4C39-9FC2-918164BD39EC@tudelft.nl> <5AA7F6EB.1010403@ucar.edu> <8ABAC31A-418F-4DE5-BE47-C58441F370E5@uu.nl>
Message-ID: <5AAAE24B.4060705@ucar.edu>

If I remember correctly, Leo's statement just applies to restart files written mid-run, whereas I think Jeremy is referring to restart files written at the end of a run, from a few runs ago.

Leo, I think you can achieve what you want by setting DOUT_S_SAVE_INTERIM_RESTART_FILES to TRUE (an xml variable).

Bill

On 3/15/18, 3:01 PM, Kampenhout, L. van (Leo) wrote:
> hi all,
>
> I just wanted to chime in on one particular thing that Jeremy said:
>
>>>>> And no I didn't ever assess the size of the coupler files. Also, it is useful (but not critical) in practice to keep interim restart files on disk to rewind simulations as needed at short notice.
>
> In my experience interim restart files get deleted when running st_archive, the short-term archiver. @Bill S, do you know why this is and how to turn that off?
>
> This may not be a problem when your restart frequency equals the stopping frequency.
>
> Leo
>
>> On 13 Mar 2018, at 17:22, Laura Muntjewerf - CITG wrote:
>>
>> Hi all,
>>
>> Thanks Bill, that seems pragmatic.
>>
>> @Marcus, size of individual daily coupler files:
>> - 36 MB for ha2x3h
>> - 21 MB for ha2x1hi
>> - 11 MB for ha2x1h
>>
>> Laura
>>
>>> On 13 Mar 2018, at 17:06, Bill Sacks wrote:
>>>
>>> Hi all,
>>>
>>> It feels like there is some convergence on more accurate requirements, though I'm not following this closely enough to be able to come up with those numbers myself. Once someone puts together a revised estimate, I'd suggest running it by Gary Strand.
>>>
>>> Bill
>>>
>>> On 3/13/18, 9:53 AM, Laura Muntjewerf - CITG wrote:
>>>> Hi all,
>>>>
>>>> Thanks Marcus, for the numbers you made. That is quite a reduction in output size from #288.
>>>>
>>>> For the size of the coupler files, that should be 0.75 TB, based on a 30-year B run (#260) [the sum of ha2x3h, ha2x1hi, and ha2x1h]. Earlier I said 1.5 TB, but I had double-counted (I merged them to use as forcing files and took the size of the entire folder; I didn't delete the original files). Sorry for the confusion.
>>>>
>>>> Bill, yes, you are right that it's not sensible to want to keep the entire BG-JG spin-up in long-term storage. It is precaution on my part, I feel, because I don't have much sense of what is important to save and what is not - but that cannot be an argument. On this matter, I am happy there is so much input in this email thread.
>>>>
>>>> Laura
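The revised 0.75 TB figure for a forcing set is consistent with the daily coupler file sizes Laura lists above; a quick check, ignoring the ha2x1d stream whose size was not reported:

    # One 30-year BG forcing set from the daily coupler history file sizes quoted above.
    DAILY_MB = {"ha2x3h": 36, "ha2x1hi": 21, "ha2x1h": 11}   # ha2x1d not reported, ignored

    mb_per_day = sum(DAILY_MB.values())            # 68 MB/day
    total_tb = mb_per_day * 365 * 30 / 1e6         # decimal TB
    print(f"~{total_tb:.2f} TB per 30-year BG")    # ~0.74 TB, roughly the 0.75 TB quoted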
From marcusl at ucar.edu Thu Mar 15 15:32:37 2018
From: marcusl at ucar.edu (Marcus Lofverstrom)
Date: Thu, 15 Mar 2018 15:32:37 -0600
Subject: [Liwg-core] BG-JG data space requirements
In-Reply-To: <5AAAE24B.4060705@ucar.edu>
References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu> <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl> <5AA30342.3010408@ucar.edu> <5AA7C94A.4070409@ucar.edu> <578D59F0-1320-4C39-9FC2-918164BD39EC@tudelft.nl> <5AA7F6EB.1010403@ucar.edu> <8ABAC31A-418F-4DE5-BE47-C58441F370E5@uu.nl> <5AAAE24B.4060705@ucar.edu>
Message-ID:

Hi,

I haven't had that problem myself, but good to know for future simulations.

Just to clarify, when I said that we only really need to save the restart files from the end of each run segment (JG# and BG#), I meant that as an absolute minimum. I agree that it would be good to save restart files every 10 years or so, just to have the opportunity to re-run if something goes wrong, or if we want to output other variables.

Marcus

On Thu, Mar 15, 2018 at 3:14 PM, Bill Sacks wrote:
> If I remember correctly, Leo's statement just applies to restart files written mid-run, whereas I think Jeremy is referring to restart files written at the end of a run, from a few runs ago.
>
> Leo, I think you can achieve what you want by setting DOUT_S_SAVE_INTERIM_RESTART_FILES to TRUE (an xml variable).
>
> Bill

--
Marcus Löfverström (PhD)
Post-doctoral researcher
National Center for Atmospheric Research
1850 Table Mesa Dr.
80305 Boulder, CO, USA
https://sites.google.com/site/lofverstrom/
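Taken together, Bill's and Marcus's suggestions amount to two case settings. A hedged sketch of how they might be applied from a case directory, assuming a CESM2-era CIME case whose ./xmlchange utility accepts NAME=VALUE pairs; REST_OPTION/REST_N are assumed to be the restart-frequency variables, and the exact names should be checked against env_run.xml for the model version in use:

    # Hypothetical example; run from the directory of an existing case.
    import subprocess

    def xmlchange(setting):
        # Thin wrapper around CIME's ./xmlchange utility (NAME=VALUE syntax assumed).
        subprocess.run(["./xmlchange", setting], check=True)

    xmlchange("REST_OPTION=nyears")                       # write restart sets every N years
    xmlchange("REST_N=10")                                # N = 10, per Marcus's suggestion
    xmlchange("DOUT_S_SAVE_INTERIM_RESTART_FILES=TRUE")   # keep interim restarts through st_archive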
I don?t know >>>> how much exactly, but to give a rough estimate; I expect it in the order of >>>> some 100s GB per simulation. >>>> >>>> 2). For the longer term, I would like to have it integrally stored on >>>> HPSS at least for a few years. >>>> >>>> >>>> Laura >>>> >>>> >>>> On 8 Mar 2018, at 16:46, Bill Sacks wrote: >>>> >>>> Hi Laura, >>>> >>>> Thanks very much for putting this together! >>>> >>>> I don't think /glade/p/cesm/liwg is a viable option right now: this >>>> falls under this nearly-full quota: >>>> >>>> /glade/p/cesm 198.48 TB 200.22 TB 99.13 % >>>> >>>> In order to determine the best space(s) for this, it would help to know: >>>> >>>> (1) How much of the data from one BG-JG iteration do you need to save >>>> once that iteration is complete? Is it really necessary to keep all of >>>> these data around, or can you delete most of it and (for example) just keep >>>> a set of restart files around to facilitate backing up or rerunning >>>> segments if you need to? >>>> >>>> (2) How much of these data need to be kept medium-long-term ? e.g., for >>>> a year, for a few years, or longer? >>>> >>>> About a year ago, CISL announced a new plan for data storage that I >>>> thought was supposed to give us all more disk space, but I haven't heard >>>> what (if anything) came of that. We can look into that if that would help. >>>> But first it would help to know how much of these data really need to be >>>> kept and for what length of time. >>>> >>>> Thanks, >>>> Bill S >>>> >>>> On 3/8/18, 8:10 AM, Laura Muntjewerf - CITG wrote: >>>> >>>> >>>> Hi all, >>>> >>>> For the BG-JG spin-up, I put together a document on data space we >>>> require [estimate], and ways to facilitate the archiving. >>>> Please find attached. Feel welcome to comment. >>>> >>>> https://docs.google.com/document/d/1Zch9-xGPwJ9sHwYBz6n- >>>> sBhsJHUptrzdPsHF1Nu6eIU/edit?usp=sharing >>>> >>>> Now I come to think of it, maybe the /glade/p/cesm/liwg is an option? >>>> >>>> Laura >>>> >>>> _______________________________________________ >>>> Liwg-core mailing listLiwg-core at cgd.ucar.eduhttp://mailman.cgd.ucar.edu/mailman/listinfo/liwg-core >>>> >>>> >>>> >>>> >>>> >>>> >>> >>> _______________________________________________ >>> Liwg-core mailing list >>> Liwg-core at cgd.ucar.edu >>> http://mailman.cgd.ucar.edu/mailman/listinfo/liwg-core >>> >>> >> >> >> -- >> Marcus L?fverstr?m (PhD) >> Post-doctoral researcher >> National Center for Atmospheric Research >> 1850 Table Mesa Dr. >> >> 80305 Boulder, CO, USA >> >> >> https://sites.google.com/site/lofverstrom/ >> >> >> > > > _______________________________________________ > Liwg-core mailing list > Liwg-core at cgd.ucar.edu > http://mailman.cgd.ucar.edu/mailman/listinfo/liwg-core > > > > > _______________________________________________ > Liwg-core mailing list > Liwg-core at cgd.ucar.edu > http://mailman.cgd.ucar.edu/mailman/listinfo/liwg-core > > -- Marcus L?fverstr?m (PhD) Post-doctoral researcher National Center for Atmospheric Research 1850 Table Mesa Dr. 80305 Boulder, CO, USA https://sites.google.com/site/lofverstrom/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From L.vanKampenhout at uu.nl Thu Mar 15 15:42:55 2018 From: L.vanKampenhout at uu.nl (Kampenhout, L. 
Date: Thu, 15 Mar 2018 21:42:55 +0000
Subject: [Liwg-core] BG-JG data space requirements
In-Reply-To: <5AAAE24B.4060705@ucar.edu>
References: <5AA15AF3.4030902@ucar.edu> <10D5C790-3B67-47F8-A65B-A095802F7914@tudelft.nl> <5AA288CB.4030603@ucar.edu> <0A3EB29F-9AE1-447D-AA59-D4AC62E67DCB@tudelft.nl> <5AA30342.3010408@ucar.edu> <5AA7C94A.4070409@ucar.edu> <578D59F0-1320-4C39-9FC2-918164BD39EC@tudelft.nl> <5AA7F6EB.1010403@ucar.edu> <8ABAC31A-418F-4DE5-BE47-C58441F370E5@uu.nl> <5AAAE24B.4060705@ucar.edu>
Message-ID:

Thanks Bill, good to know.

On 15 Mar 2018, at 22:14, Bill Sacks wrote:
> If I remember correctly, Leo's statement just applies to restart files written mid-run, whereas I think Jeremy is referring to restart files written at the end of a run, from a few runs ago.
>
> Leo, I think you can achieve what you want by setting DOUT_S_SAVE_INTERIM_RESTART_FILES to TRUE (an xml variable).
>
> Bill