SMPI offline simulation is inconsistent with the online simulation (deadlocks / message truncation)
TL;DR: I run an online simulation with SMPI to generate a trace and then replay it as an offline simulation, still with SMPI. In some cases the replay fails (deadlock or message truncation). This follows the discussion we had on Framateam.
This bug can be reproduced with the current version of SimGrid (commit 9985ff251) by following the procedure for HPL. Replace the HPL.dat file in bin/SMPI with this one:
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out output file name (if any)
6 device out (6=stdout,7=stderr,file)
1 # of problems sizes (N)
2000 Ns
1 # of NBs
100 NBs
0 PMAP process mapping (0=Row-,1=Column-major)
1 # of process grids (P x Q)
2 Ps
2 Qs
16.0 threshold
1 # of panel fact
0 PFACTs (0=left, 1=Crout, 2=Right)
1 # of recursive stopping criterium
2 NBMINs (>= 1)
1 # of panels in recursion
2 NDIVs
1 # of recursive panel fact.
0 RFACTs (0=left, 1=Crout, 2=Right)
1 # of broadcast
0 BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1 # of lookahead depth
0 DEPTHs (>=0)
2 SWAP (0=bin-exch,1=long,2=mix)
64 swapping threshold
0 L1 in (0=transposed,1=no-transposed) form
0 U in (0=transposed,1=no-transposed) form
1 Equilibration (0=no,1=yes)
8 memory alignment in double (> 0)
Run the online simulation:
smpirun -np 4 -hostfile $PLATFORMDIR/cluster_hostfile.txt -platform $PLATFORMDIR/cluster_crossbar.xml -trace-ti --cfg=tracing/filename:HPL_trace --cfg=smpi/display-timing:yes ./xhpl
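If I understand the TI format correctly, this run writes a time-independent trace: HPL_trace is an index file listing one action file per rank, and each action file contains one action per line of the form <rank> <operation> <arguments>. The lines below are purely illustrative of that format (a 1e6-byte point-to-point exchange between two ranks), not an excerpt of the actual HPL trace:
0 init
0 send 1 1e6
0 finalize
1 init
1 recv 0 1e6
1 finalize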
Then run the offline simulation:
smpirun -np 4 -hostfile $PLATFORMDIR/cluster_hostfile.txt -platform $PLATFORMDIR/cluster_crossbar.xml -trace-ti -replay HPL_trace
This last command produces the following error:
[0.044019] ../src/kernel/EngineImpl.cpp:843: [ker_engine/CRITICAL] Oops! Deadlock or code not perfectly clean.
[0.044019] [ker_engine/INFO] 4 actors are still running, waiting for something.
[0.044019] [ker_engine/INFO] Legend of the following listing: "Actor <pid> (<name>@<host>): <status>"
[0.044019] [ker_engine/INFO] Actor 1 (0@host-0.hawaii.edu): waiting for communication activity 0x55e68b4ee4f0 () in state 0 to finish
[0.044019] [ker_engine/INFO] Actor 2 (1@host-1.hawaii.edu): waiting for communication activity 0x55e68b45d3e0 () in state 0 to finish
[0.044019] [ker_engine/INFO] Actor 3 (2@host-2.hawaii.edu): waiting for communication activity 0x55e68b45ec60 () in state 0 to finish
[0.044019] [ker_engine/INFO] Actor 4 (3@host-3.hawaii.edu): waiting for communication activity 0x55e68b4506f0 () in state 0 to finish
[host-0.hawaii.edu:0:(1) 0.044019] ../src/instr/instr_paje_containers.cpp:104: [root/CRITICAL] container with name rank-1 not found
Backtrace (displayed in actor 0):
-> 0# xbt_backtrace_display_current at ../src/xbt/backtrace.cpp:30
-> 1# simgrid::instr::Container::by_name(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) at ../src/instr/instr_paje_containers.cpp:101
-> 2# smpi_container(long) at ../src/smpi/internals/instr_smpi.cpp:101
-> 3# std::_Function_handler<void (bool), TRACE_smpi_init(long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::{lambda(bool)#1}>::_M_invoke(std::_Any_data const&, bool&&) at /usr/include/c++/9/bits/std_function.h:302
-> 4# simgrid::kernel::actor::ActorImpl::cleanup() at /usr/include/c++/9/bits/stl_vector.h:909
-> 5# simgrid::kernel::context::SwappedContext::stop() at ../src/kernel/context/ContextSwapped.cpp:185
-> 6# simgrid::kernel::actor::ActorImpl::yield() at ../src/kernel/actor/ActorImpl.cpp:289
-> 7# simcall_comm_wait(simgrid::kernel::activity::ActivityImpl*, double) at ../src/simix/libsmx.cpp:156
-> 8# simgrid::smpi::Request::wait(simgrid::smpi::Request**, MPI_Status*) at ../src/smpi/mpi/smpi_request.cpp:1030
-> 9# simgrid::smpi::Request::send(void const*, int, simgrid::smpi::Datatype*, int, int, simgrid::smpi::Comm*) at ../src/smpi/mpi/smpi_request.cpp:372
-> 10# simgrid::smpi::replay::SendAction::kernel(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&) at ../src/smpi/internals/smpi_replay.cpp:466
-> 11# std::_Function_handler<void (std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&), smpi_replay_init::{lambda(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&)#6}>::_M_invoke(std::_Any_data const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&) at /usr/include/c++/9/bits/std_function.h:300
-> 12# simgrid::xbt::handle_action(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&) at ../src/xbt/xbt_replay.cpp:102
-> 13# simgrid::xbt::replay_runner(char const*, char const*) at ../src/xbt/xbt_replay.cpp:142
-> 14# smpi_replay_main at ../src/smpi/internals/smpi_replay.cpp:870
-> 15# 0x00007F94DF4C74E4 in /tmp/smpireplaymain_30823_0.so
/usr/local/lib/simgrid/smpireplaymain --cfg=smpi/privatization:1 --cfg=tracing:yes --cfg=tracing/filename:smpi_simgrid.txt --cfg=tracing/smpi:yes --cfg=tracing/smpi/format:TI --cfg=tracing/smpi/computing:yes --cfg=surf/precision:1e-9 --cfg=network/model:SMPI --cfg=smpi/tmpdir:/tmp /home/tom/SMPI-proxy-apps/src/common/cluster_crossbar.xml
Execution failed with code 134.
Another issue (maybe related) can be triggered by modifying the NB parameter in the HPL.dat file to 4000 and repeating the two previous commands (online and offline simulations). This time, we obtain the following error:
[host-1.hawaii.edu:1:(2) 4.652516] ../src/smpi/mpi/smpi_request.cpp:994: [root/CRITICAL] recv - returned MPI_ERR_TRUNCATE instead of MPI_SUCCESS
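For context, MPI_ERR_TRUNCATE is raised when a matched receive buffer is smaller than the incoming message, so during the replay a receive is apparently posted with a size smaller than what the sender actually transmits. A minimal standalone sketch (illustrative only, unrelated to the HPL trace) that reproduces the same error class with two ranks and the error handler set to return:
/* truncate_demo.c: illustrative only, not part of the reproducer above. */
/* Rank 0 sends 8 ints; rank 1 posts a receive for only 4 ints, so the   */
/* matching MPI_Recv fails with MPI_ERR_TRUNCATE.                        */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf[8] = {0};
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Return error codes instead of aborting so the error can be printed. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    if (rank == 0) {
        MPI_Send(buf, 8, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        int err = MPI_Recv(buf, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        if (err != MPI_SUCCESS) {
            char msg[MPI_MAX_ERROR_STRING];
            int len;
            MPI_Error_string(err, msg, &len);
            printf("rank 1: MPI_Recv failed: %s\n", msg);
        }
    }
    MPI_Finalize();
    return 0;
}
This can be compiled with smpicc and run with smpirun -np 2 (or with any MPI implementation); it is only meant to show what kind of send/recv size mismatch produces this error message.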