
[Refactor] Remove unnecessary manual MPI_Reduce in hsolver dav code#7314

Merged
mohanchen merged 1 commit into deepmodeling:develop from Cstandardlib:refactor/reduce on May 7, 2026

Conversation

@Cstandardlib
Collaborator

Summary

  • Remove the unnecessary manual swap buffers and MPI_Reduce calls in diago_dav_subspace.cpp and diago_david.cpp
  • The reduce_pool template already provides std::complex<float> and std::complex<double> instantiations that use MPI_IN_PLACE, so the manual buffer management is redundant
  • This removes roughly 75 lines of code while keeping behavior identical

Related Issues

Fixes #7313

Changes

source_hsolver/diago_dav_subspace.cpp

  • Removed manual swap buffer allocation and syncmem_complex_op call
  • Removed std::is_same<T, double>::value type check
  • Removed manual MPI_Reduce calls for complex types
  • Direct use of Parallel_Reduce::reduce_pool for both hcc and scc

source_hsolver/diago_david.cpp

  • Same simplification as above for hcc reduce operation

Testing

  • No functional changes; existing tests should pass
  • The refactored code uses the same reduce_pool function that was already used for double type

Copilot AI review requested due to automatic review settings May 6, 2026 09:44

Copilot AI left a comment


Pull request overview

This PR refactors the Davidson-based hsolver MPI reduction logic by removing manual temporary buffers and explicit MPI_Reduce calls, and instead consistently using Parallel_Reduce::reduce_pool for reducing hcc/scc contributions across MPI ranks.

Changes:

  • Simplified MPI reduction paths in Davidson solvers by deleting manual swap buffer management and explicit MPI_Reduce calls.
  • Unified reduction calls to Parallel_Reduce::reduce_pool(...) for both real and complex builds in DiagoDavid::cal_elem and Diago_DavSubspace::cal_elem.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

Reviewed files:

  • source/source_hsolver/diago_david.cpp: Replaces manual complex reduction logic with Parallel_Reduce::reduce_pool during hcc accumulation.
  • source/source_hsolver/diago_dav_subspace.cpp: Applies the same reduction simplification for hcc and scc accumulation.


Comment threads: source/source_hsolver/diago_david.cpp, source/source_hsolver/diago_dav_subspace.cpp
reduce_pool template already has single precision (std::complex<float>) instantiation with MPI_IN_PLACE, so manually creating swap buffers and calling MPI_Reduce for complex types is unnecessary.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
Collaborator

@mohanchen mohanchen left a comment


LGTM

@mohanchen mohanchen added GPU & DCU & HPC GPU and DCU and HPC related any issues Diago Issues related to diagonalization methods Refactor Refactor ABACUS codes labels May 7, 2026
@mohanchen mohanchen merged commit 22c985e into deepmodeling:develop May 7, 2026
15 checks passed
@Cstandardlib Cstandardlib deleted the refactor/reduce branch May 7, 2026 02:13

Labels

Diago Issues related to diagonalization methods GPU & DCU & HPC GPU and DCU and HPC related any issues Refactor Refactor ABACUS codes

Projects

None yet

Development

Successfully merging this pull request may close these issues.

No need to do MPI_Reduce manually with IN_PLACE available in hsolver/dav code

3 participants