A place to discuss the MulticoreBSP library and its applications, and for discussing the use of the Bulk Synchronous Parallel model on shared-memory architectures, or hybrid distributed/shared architectures.
It is my pleasure to announce the third update to the MulticoreBSP for C library. Version 1.2 brings improved pinning support for nested BSP runs, which benefits code that explicitly follows the Multi-BSP model (as opposed to flat BSP). C++ support has been extended with templated BSPlib primitives, which provide an escape from having to deal explicitly with byte sizes and byte offsets.
Smaller improvements concern documentation, internal data structures, and compilation support; the latter now includes release testing with the clang LLVM compiler, in addition to GCC and the Intel C++ Compiler. Several bugs from version 1.1.0 have been resolved as well; I would like to thank Jing Fan and Joshua Moerman for reporting some of these. As always, the changelog contains more details.
Version 1.2.0 can be downloaded from the following URL: http://multicorebsp.com/download/c/
Explicitly writing parallel programs using the Multi-BSP model can be done using nested SPMD sections in MulticoreBSP for C. This is not straightforward, however, and easily leads to codes that are tied to specific architectures. This hurts portability, reduces productivity, and undermines the ease of use that BSP libraries are otherwise commonly known for. It is also at odds with the intention of (Multi-)BSP as an abstract bridging model.
To enable writing Multi-BSP codes that are (1) valid for all Multi-BSP computers, (2) clearly and transparently structured, and (3) compatible with existing BSPlib code fragments, version 1.3 will add C++ extensions for explicit Multi-BSP programming.
Many have requested that MulticoreBSP for C be deployable over distributed-memory architectures. The original plan was to handle this problem simultaneously with the addition of automatic global barrier avoidance, pipelined communication, and fault tolerance. It remains unclear, however, in what time frame I will be able to address these important issues. From version 2.0 onwards, I will hence first make MulticoreBSP for C a fully hybrid system, such that any BSP program automatically uses MPI for inter-node process coordination while using PThreads for intra-node threading. These extensions will continue to adhere to the updated BSPlib standard published with the introduction of MulticoreBSP for C; in particular, nested BSP runs will remain possible, which in turn allows the Multi-BSP C++ extensions introduced in version 1.3 to be deployed on any MPI-supporting cluster or supercomputer.
A secondary target is to not interfere with existing threading and parallel programming interfaces: advanced users of version 2.0 of MulticoreBSP for C will be able to mix their (Multi-)BSP codes with any existing MPI, OpenMP, or Cilk Plus codes they may already have.