Managed Projects

NWChem

  No analysis available

NWChem provides computational chemistry tools that are scalable both in their ability to treat large computational chemistry problems efficiently and in their use of available parallel computing resources, from high-performance parallel supercomputers to conventional workstation clusters.

0 lines of code

19 current contributors

Time since last commit: not available

6 users on Open Hub

Activity Not Available
Primary language: not available
Licenses: Educational Community License 2.0 (ecl2)

mukautuva

  Analyzed 26 days ago

Adapting to multiple MPI ABIs

18.8K lines of code

0 current contributors

3 months since last commit

1 user on Open Hub

Low Activity
Licenses: No declared licenses

OSPRI

  Analyzed about 21 hours ago

One-Sided PRImitives

1.56M lines of code

0 current contributors

over 7 years since last commit

1 user on Open Hub

Inactive

Argonne 1-sided (A1)

  Analyzed about 11 hours ago

A1 is a completely new implementation of ARMCI-like one-sided communication, that is, an alternative runtime system for the Global Arrays Toolkit; it supports Blue Gene/P via DCMF.

99.2K lines of code

0 current contributors

about 10 years since last commit

1 user on Open Hub

Inactive

MPI Quality of Implementation Tests

  Analyzed about 4 hours ago

MPI Quality of Implementation Tests

1.73K lines of code

0 current contributors

over 7 years since last commit

1 user on Open Hub

Inactive
Tags: mpi

ARMCI-MPI

  Analyzed about 21 hours ago

ARMCI-MPI is an MPI-based implementation of the ARMCI runtime system. Both MPI-2 and MPI-3 are supported.

44.1K lines of code

2 current contributors

19 days since last commit

1 user on Open Hub

Very Low Activity

BigMPI

  Analyzed about 19 hours ago

An interface to MPI for large messages (count > INT_MAX).

11.6K lines of code

0 current contributors

about 1 year since last commit

1 user on Open Hub

Very Low Activity
Tags: mpi, mpi_3

OSHMPI: OpenSHMEM over MPI-3

  Analyzed about 12 hours ago

OpenSHMEM over MPI-3

24.7K lines of code

0 current contributors

over 2 years since last commit

1 user on Open Hub

Inactive

PRK

  Analyzed about 1 hour ago

This is a set of simple programs that can be used to explore the features of a parallel platform.

113K lines of code

4 current contributors

4 days since last commit

1 user on Open Hub

Very Low Activity

m-a-d-n-e-s-s

  Analyzed about 15 hours ago

MADNESS provides a high-level environment for the solution of integral and differential equations in many dimensions using adaptive, fast methods with guaranteed precision based on multiresolution analysis and novel separated representations.

There are three main components to MADNESS. At the lowest level is a new petascale parallel programming environment that increases programmer productivity and code performance/scalability while maintaining backward compatibility with current programming tools such as MPI and Global Arrays. The numerical capabilities built upon the parallel tools provide a high-level environment for composing and solving numerical problems in many (1-6+) dimensions. Finally, built upon the numerical tools are new applications with an initial focus on chemistry, atomic and molecular physics, materials science, and nuclear structure. Please look in the wiki for more information and project activity.

Getting the source: anonymous, read-only checkout via svn checkout http://m-a-d-n-e-s-s.googlecode.com/svn/local/trunk m-a-d-n-e-s-s-read-only. Developers, please see the wiki Subversion page for instructions.

Underneath the hood: for a glimpse at what's going on under the hood, have a look at the call graph generated using the Google perftools. It nicely shows how work is funneled through the task queue and how about 50% of the time is spent in the optimized matrix routines. The calculation computed the energy and gradient for di-nitrogen using the local density approximation on a two-core Thinkpad x61t.

Funding: the developers gratefully acknowledge the support of the Department of Energy, Office of Science, Office of Basic Energy Sciences and Office of Advanced Scientific Computing Research, under contract DE-AC05-00OR22725 with Oak Ridge National Laboratory. The developers gratefully acknowledge the support of the National Science Foundation under grant 0509410 to the University of Tennessee in collaboration with The Ohio State University (P. Sadayappan); the MADNESS parallel runtime and parallel tree algorithms include concepts and software developed under this project. The developers gratefully acknowledge the support of the National Science Foundation under grant NSF OCI-0904972 to the University of Tennessee; the solid-state physics and multiconfiguration SCF capabilities are being developed by this project. The developers gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) under subcontract from Argonne National Laboratory as part of the High-Productivity Computer Systems (HPCS) language evaluation project.

484K lines of code

7 current contributors

1 day since last commit

1 user on Open Hub

Moderate Activity