EAGLE_6 crashes when running with MPI and 50 top-level cells
The EAGLE_6 volume crashes with:
[0000] [00000.5] engine_config: Absolute minimal timestep size: 6.938894e-20
[0000] [00000.5] engine_config: Minimal timestep size (on time-line): 7.450580e-11
[0000] [00000.5] engine_config: Maximal timestep size (on time-line): 7.812500e-05
[0000] [00000.5] engine_config: Restarts will be dumped every 6.000000 hours
[0000] [00000.6] main: engine_init took 43.659 ms.
[0000] [00000.6] main: Running on 0 gas particles, 0 star particles and 830584 DM particles (830584 gravity particles)
[0000] [00000.6] main: from t=0.000e+00 until t=1.000e-02 with 16 threads and 16 queues (dt_min=1.000e-10, dt_max=1.000e-04)...
[0000] [00008.2] engine_init_particles: Setting particles to a valid state...
[0000] [00008.2] engine_init_particles: Computing initial gas densities.
[0007] [00020.4] scheduler.h:scheduler_activate_send():146: Missing link to send task.
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
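
The abort comes from the consistency check in scheduler_activate_send() (scheduler.h:146), which searches a cell's list of send tasks for one addressed to the required rank and raises an error when none was ever created, i.e. the task graph is missing a communication task that a neighbouring rank expects. A standalone model of that check is sketched below; all structure, field, and function names are illustrative, not copied from the SWIFT source:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-ins for SWIFT's task and link structures: each
 * cell carries a linked list of send tasks, one per destination rank. */
struct task { int dest_rank; };
struct link { struct task *t; struct link *next; };

static struct link *activate_send(struct link *l, int node_id) {
  /* Walk the cell's send-task list looking for the target rank. */
  while (l != NULL && l->t->dest_rank != node_id) l = l->next;
  /* No matching task means the task graph is inconsistent; the real
   * code prints "Missing link to send task." and calls MPI_Abort(). */
  if (l == NULL) {
    fprintf(stderr, "Missing link to send task.\n");
    exit(EXIT_FAILURE);
  }
  return l; /* the real code would now activate l->t */
}

int main(void) {
  struct task to_rank3 = {3}, to_rank5 = {5};
  struct link second = {&to_rank5, NULL};
  struct link first = {&to_rank3, &second};
  activate_send(&first, 5); /* found: the list has a send task for rank 5 */
  activate_send(&first, 7); /* missing: reproduces the error path above */
  return 0;
}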
The code was configured with:
Config. options: '--with-metis'
Compiler: ICC, Version: 17.0.20170213
CFLAGS : '-idirafter /usr/include/linux -O3 -ansi_alias -xAVX -pthread -w2 -Wunused-variable -Wshadow -Werror'
HDF5 library version: 1.8.18
FFTW library version: 3.x (details not available)
GSL library version: 2.3
MPI library: Intel(R) MPI Library 2017 Update 2 for Linux* OS (MPI std v3.1)
METIS library version: 5.1.0
and run with:
mpirun -np 8 ../swift_mpi -G -t 16 eagle_6.yml -n 5 -P Scheduler:max_top_level_cells:50
It runs fine when hydrodynamics is enabled (-s) or when run without MPI:
mpirun -np 8 ../swift_mpi -s -G -t 16 eagle_6.yml -n 5 -P Scheduler:max_top_level_cells:50
mpirun -np 8 ../swift_mpi -s -t 16 eagle_6.yml -n 5 -P Scheduler:max_top_level_cells:50
../swift -G -t 16 eagle_6.yml -n 5 -P Scheduler:max_top_level_cells:50
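
For completeness, the -P override above should be equivalent to setting the value directly in the parameter file, assuming the standard SWIFT Section:parameter layout of eagle_6.yml:

Scheduler:
  max_top_level_cells: 50  # overridden on the command line in the runs above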