Hi all,

I successfully ran my coupled model on blizzard@dkrz in Hamburg, but I now have some problems running it on our DWD computer (a Cray XC30). I use the following commands:

On Blizzard:

    # add nproc_atm lines for the ocean model
    yes ./balocx | head -n 16 > cmdfile
    # add nproc_atm lines for the atmospheric model
    yes ./cosmoc | head -n 16 >> cmdfile
    # start the coupled model in MPMD mode using the created cmdfile
    poe -pgmmodel mpmd -cmdfile cmdfile

On the Cray:

    aprun -n 20 cosmoc : -n 20 balocx

I only get the nout file and a core file. I was wondering if you know of anyone working on this kind of computer?

Thanks, best regards,
Jennifer
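[Editor's note: for readers unfamiliar with the cmdfile mechanism, the `yes … | head` idiom above simply writes one line per MPI rank into the cmdfile, naming the executable that rank should run. A minimal sketch, using the same executable names and rank counts as the post (the binaries need not exist for the file to be built):]

```shell
# Recreate the MPMD cmdfile from the 'yes | head' idiom:
# one line per MPI rank, naming the executable for that rank.
yes ./balocx | head -n 16 >  cmdfile   # 16 ranks for the ocean model
yes ./cosmoc | head -n 16 >> cmdfile   # 16 ranks for the atmosphere
wc -l < cmdfile                        # 32 entries in total
```

`poe -pgmmodel mpmd -cmdfile cmdfile` then launches one process per line of this file.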
Hi Jennifer,

At the Met Office we're running on a Cray XC40, which is probably not too dissimilar to your machine except perhaps in the number of PEs per node (we mostly have 32 per node). In case it's of any help, our aprun command might look something like the following:

    aprun -ss -n 128 -N 32 -S 16 -d 1 -j 1 toyatm : \
          -ss -n 480 -N 32 -S 16 -d 1 -j 1 toyoce : \
          -ss -n 8 -N 8 -d 1 -j 1 xios.x

So in this case we're running the atmosphere ("toyatm") on 128 processes and the ocean ("toyoce") on 480. (Note: we also have the NEMO I/O server, "xios.x", as a third executable in the above line, running on 8 procs; you probably don't care about this part.)

Richard
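[Editor's note: with 32 PEs per node, the -n/-N pairs in Richard's aprun line translate to node counts by ceiling division. A small shell sketch of that arithmetic, using the counts from the post:]

```shell
# Nodes needed for n MPI ranks at p PEs per node (ceiling division).
nodes() { echo $(( ($1 + $2 - 1) / $2 )); }
nodes 128 32   # toyatm: 4 nodes
nodes 480 32   # toyoce: 15 nodes
nodes   8  8   # xios.x: 1 node
```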
Hello,

I have finally found my error! The name of the executable is hardcoded inside my NEMO version (really stupid). I had changed the name for the run, and this caused the problem. I found it by switching the order of the executables in the aprun line, after which I got a clear error message. I will change the hardcoded part; this is not acceptable.

Many thanks for your help/concern,
Jenny Brauch
Hi Jennifer,

Currently we use Cray Fortran, Version 8.3.4. We also have the option to use the Intel IFORT compiler, but we can't do that routinely at the moment because there are certain things which IFORT doesn't like about some parts of our model code (I don't think there are any OASIS3-MCT related issues).

Here's an example of the command line for compiling one of the routines in our model, though since this is from a debugging run, the -O0 and -Ovector1 will obviously not be needed in general:

    ftn -oo/oasis3_atmos_init_mod.o -c -I./include \
        -I/data/d02/shaddad/projects/moci_central/modules//packages/oasis3-mct/1.0/build/lib/psmile.MPI1 \
        -s default64 -e m -J ./include \
        -I/projects/um1/gcom/gcom5.2/meto_xc40_cray_mpp/build/include \
        -I/projects/um1/grib_api/cce-8.3.4/1.13.0/include \
        -O0 -Ovector1 -hfp0 -hflex_mp=strict -G1 -h omp \
        /um/src/control/coupling/oasis3_atmos_init_mod.F90

Hope that helps.
Richard
Hi Richard,

Many thanks for your answer! I have to specify my processor distribution the way you do, too, but so far I haven't optimized it. And yes, we have fewer processors per node. Could you tell me which compiler you use, and with which options? This would be very helpful; I'm struggling with the Cray compiler at the moment.

Regards,
Jennifer