Trilinos · Issues · #1275 · Closed
Created Apr 27, 2017 by James Willenbring (@jmwille, Maintainer)

MPI scaling issues in solvers (triaged, long term, epic)

Created by: mhoemmen

@trilinos/tpetra

Issue reported by @spdomin and currently tracked elsewhere.

Stub summary: Some MPI implementations consume buffer space on a process proportional to the number of processes with which that process exchanges point-to-point messages. These buffers grow on demand, but the implementation never returns the memory to the application. This puts the burden on applications to prefer collectives over point-to-point messages and, in general, to keep each process's point-to-point in-degree and out-degree low.
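Not from the original report, but a minimal sketch of the contrast: the naive pattern below makes rank 0 receive one point-to-point message from every other rank, so an MPI implementation that keeps per-peer buffers can grow memory on rank 0 linearly in the process count. The collective version expresses the same reduction with a single `MPI_Reduce`, which the library is free to implement with a tree so that no rank talks point-to-point with more than O(log N) peers. All names here are illustrative; the issue itself points at no specific code.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  double local = (double) rank;

  /* Naive pattern: every rank sends to rank 0, giving the root an
     in-degree of size-1.  Per-peer MPI buffer space at the root then
     scales with the number of processes and is never given back. */
  double naiveSum = local;
  if (rank == 0) {
    for (int src = 1; src < size; ++src) {
      double val;
      MPI_Recv(&val, 1, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
               MPI_STATUS_IGNORE);
      naiveSum += val;
    }
  } else {
    MPI_Send(&local, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
  }

  /* Preferred pattern: one collective.  The implementation may use a
     reduction tree, keeping every rank's point-to-point degree small. */
  double collectiveSum = 0.0;
  MPI_Reduce(&local, &collectiveSum, 1, MPI_DOUBLE, MPI_SUM, 0,
             MPI_COMM_WORLD);

  if (rank == 0) {
    printf("naive = %g, collective = %g\n", naiveSum, collectiveSum);
  }
  MPI_Finalize();
  return 0;
}
```

For genuinely sparse exchanges (e.g., halo swaps) the analogous advice would be MPI-3 neighborhood collectives such as `MPI_Neighbor_alltoallv`, which make the small communication degree explicit to the implementation.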
