15 Jul 2011

CULA Sparse beta applications now being accepted

by Kyle

The CULA team is pleased to announce that we are now accepting applications for the beta program of our upcoming GPU-accelerated sparse linear algebra library: CULA Sparse. Separate from our "CULAPACK" library for dense matrices, CULA Sparse features solvers and routines designed to work with sparse matrices.

Our first release (Beta1) will feature a handful of iterative solvers (CG, BiCG, BiCG-Stab, GMRES), various storage formats (CSR, CSC, COO), and associated preconditioners (Jacobi, ILU). Future beta versions will introduce more solvers, storage formats, preconditioners, and features.
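
To give a sense of how such an iterative solve is typically set up, here is a minimal C sketch of a CG solve with a Jacobi preconditioner on a double-precision CSR matrix. The header, type, and function names used here (cula_sparse.h, culaIterativeConfig, culaDcsrCgJacobi, and so on) are illustrative assumptions rather than the published beta API; please consult the documentation shipped with the CULA Sparse beta for the actual interface.

    /* Hypothetical sketch of a preconditioned CG solve on CSR data.
     * All CULA Sparse names below are assumed, not the documented API. */
    #include <stdio.h>
    #include <cula_sparse.h>                     /* assumed header name */

    int solve_csr_cg(int n, int nnz,
                     const double* vals, const int* colInd, const int* rowPtr,
                     const double* b, double* x)
    {
        culaIterativeConfig config;              /* assumed config struct */
        culaIterativeResult result;              /* assumed result struct */
        culaStatus status;

        culaIterativeConfigInit(&config);        /* assumed initializer   */
        config.tolerance     = 1e-6;             /* stop when the residual norm is small */
        config.maxIterations = 1000;

        /* CG with a Jacobi (diagonal) preconditioner on double-precision CSR data */
        status = culaDcsrCgJacobi(&config, n, nnz, vals, colInd, rowPtr,
                                  x, b, &result);
        if (status != culaNoError)
        {
            printf("solve failed with status %d\n", (int)status);
            return -1;
        }
        printf("converged in %d iterations\n", (int)result.iterations);
        return 0;
    }

The same pattern would apply to the other solver, storage format, and preconditioner combinations; only the routine name and the storage arrays would change.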

Please visit our application page to apply for admission to the beta program. If selected, we'll notify you by e-mail and update your CULA account with access to the CULA Sparse installer.

26 May 2011

CULA R12 (CUDA 4.0) Now Available

by John

We are very pleased to announce that CULA R12, based on CUDA 4.0, is available immediately at our downloads page.

Besides CUDA 4.0 support, this release also introduces the new link-compatible interface, which allows zero-effort porting of existing code that uses LAPACK routines. For more information, please see the included documentation or read our blog post about the feature; a small sketch of the idea follows below.
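
To make the "zero-effort" claim concrete, consider a small host program that already calls the standard LAPACK routine dgesv_ through its usual Fortran interface. The intent of a link-compatible interface is that this source stays untouched and only the link line changes (for example, swapping -llapack for the CULA link library; the exact library names to use are an assumption here and are listed in the included documentation).

    /* Existing host code that calls the standard LAPACK routine dgesv_.
     * With a link-compatible interface, only the link step changes. */
    #include <stdio.h>

    /* Standard Fortran LAPACK symbol: solves A*X = B for a general matrix. */
    extern void dgesv_(const int* n, const int* nrhs, double* a, const int* lda,
                       int* ipiv, double* b, const int* ldb, int* info);

    int main(void)
    {
        /* 2x2 system: A = [[4, 1], [2, 3]], b = [1, 2], stored column-major */
        int n = 2, nrhs = 1, lda = 2, ldb = 2, info;
        double a[4] = { 4.0, 2.0, 1.0, 3.0 };
        double b[2] = { 1.0, 2.0 };
        int ipiv[2];

        dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);
        if (info != 0)
        {
            printf("dgesv_ failed, info = %d\n", info);
            return 1;
        }
        printf("x = [%f, %f]\n", b[0], b[1]);  /* solution overwrites b */
        return 0;
    }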

5 May 2011

CULA Added to TSUBAME 2.0

by Liana

We are excited to share that the engineers on the TSUBAME 2.0 team have chosen to add our CULA library to their software stack!

Here is what Professor Satoshi Matsuoka, TSUBAME 2.0 project leader, had to say about it:

"The majority of the achievable FLOPS in TSUBAME2.0 is due to the power of the GPUs, so it is essential that we provide as comprehensive a software stack to utilize them to their fullest potential as possible. CULA will be an extremely valuable part of the portfolio, allowing our scientists to conduct large scale simulations at unprecedented speeds."

Tokyo Tech signed up for a 4-year site license, which means that users of their supercomputer can rely on having access to the most current version of CULA.

We would also like to recognize Katsuya Nishi, CEO of Best Systems, who facilitated this effort with Tokyo Tech. Best Systems is one of our major CULA resellers in Japan, and we look forward to a long-lasting, mutually beneficial relationship with their team and customers.

“I believe there is a large need for a GPU-optimized linear algebra library such as CULA in Japan,” said Katsuya Nishi, CEO of Best Systems. “Tsubame 2.0 is a great example of how the Japanese scientific community has embraced GPGPU computing on a petaflop scale. The trend is for an even greater adoption of GPUs across all major segments, including industry, government and higher education. Having a comprehensive GPU library like CULA in our portfolio gives us a great competitive edge.”

If you'd like to read the full press release, please check out our news section. We will soon be making another announcement related to our partnerships in Japan, so... stay tuned for more.