Opened 7 years ago

Closed 7 years ago

Last modified 3 years ago

#1231 closed Task (invalid)

Numeric problem along the north fold

Reported by: ufla
Owned by: nemo
Priority: low
Milestone:
Component: OCE
Version: release-3.6
Severity:
Keywords:
Cc: laurent@…
Review:
MP ready?:
Progress:

Description

While testing the dev_MERGE_2013 branch (currently revision 4367) with ORCA1/LIM, we encounter model aborts caused by very high zonal velocities. The problem has been reproduced by two different users on two different machines.

What we know so far about the problem is:

  • it occurs with ORCA1 but not with ORCA2
  • it occurs with both LIM2 and LIM3 (so we do not assume a LIM bug for now)
  • it occurs with different MPI domain decompositions as well as without key_mpp_mpi (single-core run)
  • it always occurs after a few (6) of the hourly time steps

The problem always looks the same:

  stpctl: the zonal velocity is larger than 20 m/s
  ====== 
 kt=     6 max abs(U):   49.71    , i j k:    98  291   19

When we look at the data of output.abort we always see a specific pattern (see attached file vozocrtx.output.abort.7x5.png).

A clue might be that we see a difference between the mesh mask (written out by NEMO) and the bathymetry file that is used. Attached is a NetCDF file containing the bathymetry data, the tmask variable from mesh_mask, and the difference of the two (in terms of 0/1 values). The difference in the two topmost rows is what concerns us.
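
For reference, this is roughly how the 0/1 comparison behind the attached mesh-diff.nc can be reproduced; a minimal sketch in Python/netCDF4, where the file names (bathy_meter.nc, mesh_mask.nc) and variable names (Bathymetry, tmask) are assumptions and may need adjusting to your setup:

  # Sketch: compare the land/sea pattern of the bathymetry file with the
  # surface tmask written by NEMO. File and variable names are assumptions.
  import numpy as np
  from netCDF4 import Dataset

  with Dataset("bathy_meter.nc") as nc:
      bathy = nc.variables["Bathymetry"][:]      # (y, x) depths in metres

  with Dataset("mesh_mask.nc") as nc:
      tmask = nc.variables["tmask"][0, 0, :, :]  # surface slice of (t, z, y, x)

  bathy01 = (bathy > 0).astype(np.int8)          # 1 = ocean, 0 = land
  diff = bathy01 - tmask.astype(np.int8)         # non-zero where the two disagree

  # In our case the non-zero entries sit in the two northernmost (north-fold) rows.
  print("rows with mask differences:", np.unique(np.nonzero(diff)[0]))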

We are prepared to do more tests and provide more information as needed.

Thanks in advance for helping!

Commit History (0)

(No commits)

Attachments (2)

vozocrtx.output.abort.7x5.png (35.8 KB) - added by ufla 7 years ago.
mesh-diff.nc (6.1 MB) - added by ufla 7 years ago.

Change History (8)

Changed 7 years ago by ufla

Changed 7 years ago by ufla

comment:1 Changed 7 years ago by ufla

  • Type changed from Defect to Development branch

comment:2 Changed 7 years ago by acc

I have an ORCA1-LIM2, CORE2-forced configuration running with v3.6alpha that doesn't appear to suffer from this problem. Admittedly, I haven't looked at the output in detail, but I can run 5 days (120 time steps) without problems. This looks to be a problem with your particular configuration rather than with the code. Possibly jperio is set incorrectly? ORCA1 needs jperio=6.

comment:3 Changed 7 years ago by clevy

I have the same kind of problem, but not in the same place: for me it happens in the Antarctic, at the ice edge. I'm fairly convinced at this stage that it is related to GLS; when I switch back to TKE, it runs fine.
1) Uwe, Andrew may be right, because since v3_6 some domain variables have been added to the namelist. You should check the ones for ORCA1 in CONFIG/SHARED/README_configs_namcfg_namdom and make sure they are in your namelist_cfg.
2) And maybe confirm (or not) my guess: if you are using GLS, try replacing it with TKE.
… This does not solve the GLS problem.
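
For anyone who lands here with the same symptom, this is a minimal sketch of the relevant &namcfg block in namelist_cfg, assuming the NEMO v3.6 namelist layout. The grid dimensions below are indicative ORCA1 values only and should be checked against CONFIG/SHARED/README_configs_namcfg_namdom; the decisive setting for the north fold is jperio:

  &namcfg          !  parameters of the configuration (v3.6 layout; values to be checked)
     cp_cfg  = "orca"   !  name of the configuration
     jp_cfg  = 1        !  resolution of the configuration
     jpidta  = 362      !  1st lateral dimension (indicative ORCA1 value)
     jpjdta  = 292      !  2nd lateral dimension (indicative ORCA1 value)
     jpkdta  = 75       !  number of levels (example only; depends on your vertical grid)
     jpiglo  = 362      !  1st dimension of the global domain
     jpjglo  = 292      !  2nd dimension of the global domain
     jperio  = 6        !  lateral boundary condition type:
                        !  4 = E-W cyclic + north fold with T-point pivot (ORCA2)
                        !  6 = E-W cyclic + north fold with F-point pivot (ORCA1)
  /

With the wrong jperio the two northernmost rows are not folded the way the ORCA1 grid expects, which would be consistent with the mask difference in the two topmost rows reported in the description.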

comment:4 Changed 7 years ago by ufla

Hi, thanks for your answers!

I had indeed missed some of the added namelist entries, and setting jperio=6 did the trick! Thanks again for your prompt response!

Should I be closing the ticket myself or is someone of the NEMO team taking care of that?

comment:5 Changed 7 years ago by ufla

  • Resolution set to invalid
  • Status changed from new to closed

Closing the issue as there is nothing to fix in the code.

comment:6 Changed 3 years ago by nemo

  • Type changed from Development to Task

Remove 'Development' type
