MPI scaling issues in solvers (triaged, long term, epic)
Created by: mhoemmen
@trilinos/tpetra
Issue reported by @spdomin and currently tracked elsewhere.
Stub summary: Some MPI implementations consume buffer space on a process proportional to the number of processes with which that process exchanges point-to-point messages. These buffers grow dynamically, but the MPI implementation annoyingly never returns that memory. This puts a burden on applications to prefer collectives over point-to-point messages and, in general, to keep the in- and out-degree of point-to-point messages on each process low.
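As an illustration of the "in- and out-degree" metric the stub refers to, here is a small hypothetical sketch (plain Python, not Tpetra or MPI code) that counts, for each process, how many distinct peers it sends to and receives from given a point-to-point communication pattern. Under the buffering behavior described above, per-process buffer footprint grows with these degrees, which is why a pattern like "everyone sends to rank 0" is better expressed as a collective (e.g. `MPI_Gather`/`MPI_Reduce`).

```python
# Illustrative sketch only: compute per-process out-degree and in-degree
# of a point-to-point pattern, given (sender, receiver) rank pairs.
from collections import defaultdict

def p2p_degrees(pairs):
    out_peers = defaultdict(set)  # rank -> set of ranks it sends to
    in_peers = defaultdict(set)   # rank -> set of ranks it receives from
    for src, dst in pairs:
        out_peers[src].add(dst)
        in_peers[dst].add(src)
    return ({p: len(s) for p, s in out_peers.items()},
            {p: len(s) for p, s in in_peers.items()})

# Hypothetical pattern on 8 ranks: rank 0 exchanges messages with every
# other rank, so its degree is P-1 -- the kind of hot spot that a
# collective would avoid and that inflates MPI buffer usage on rank 0.
pairs = [(0, p) for p in range(1, 8)] + [(p, 0) for p in range(1, 8)]
out_deg, in_deg = p2p_degrees(pairs)
print(out_deg[0], in_deg[0])  # -> 7 7
```

In a real solver the pairs would come from the sparse-matrix distribution (e.g. a Tpetra `Import`/`Export` plan); keeping both degrees small per process is the mitigation the stub suggests.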