22 Sep 2011

CULA Sparse Beta 2

by John

CULA Sparse Beta 2 is undergoing final packaging and testing and will be sent to our Beta testers very soon. This is a feature update with the following changes:

  • Added the BiCGSTAB solver
  • Added the BiCGSTAB(L) solver
  • Complex (Z) data types available for all solvers
  • Fortran module added
  • Configuration parameter to return best experienced solution
  • Maximum runtime configuration parameter
  • New example for Fortran interface
  • New example for MatrixMarket data
  • Several important bug fixes, as noted by the Beta testers

This release also contains the first steps towards interoperability with CULA for dense linear algebra, which some hybrid methods require. A user will now need to link cula_core and cula_sparse rather than just the sparse lib. Full interoperability will require CULA R13, which is also coming soon.
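In practice the two-library requirement means adding one extra `-l` flag to your link line. A sketch for Linux follows; the install paths and environment variable are assumptions, not documented values from this post:

```shell
# Hypothetical link line; adjust include/lib paths to your CULA install.
gcc my_solver.c -I"$CULA_ROOT/include" -L"$CULA_ROOT/lib64" \
    -lcula_core -lcula_sparse -o my_solver
```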

We are still accepting Sparse Beta applications, so register soon!

25 Aug 2011

CULA Sparse Beta Released

by Liana

Another big milestone achieved by our CULA team today: CULA Sparse Beta is now available!

We have just sent out emails to everyone who signed up to participate in the Beta Program, and we look forward to receiving their feedback on the work we have done so far. We want to hear about the problems they are able to solve, what necessary functionality is missing from our package, how easy the library was to use, and so on. In return for their feedback and suggestions, participants will receive a 25% discount when the final version is released, should they wish to buy a license.

If you would like to test our Sparse solvers, we are still accepting registrations. When you sign up, please provide as much information as possible. You can read a previous post for full details on what this version includes, but here is a quick list of what you will find in this first release:

  • Cg, BiCg, and GMRES solvers
  • CSC, CSR, COO storage formats
  • Jacobi, Block Jacobi, ILU0 preconditioners
  • Double precision only
  • GPU solvers only
  • A single, simple C-style interface suitable for initial testing
  • Support for all the standard CUDA platforms: Linux 32/64, Win 32/64, OSX

Thanks everyone for participating and stay tuned for updates!

18 Aug 2011

Update for CULA Sparse Beta

by John

Hello all, we are nearing the official launch of the CULA Sparse Beta program. First off, I'd like to thank everyone who has submitted applications, and to encourage others to do so as well. Notifications for program acceptance will start going out at the beginning of next week, with the Beta 1 installers available very shortly thereafter. I would like to note that acceptance requires fully completing the form, which is quite short.

As for the package itself, the contents have been disclosed at other venues, but the final Beta 1 list is shaping up to include:

  • Cg, BiCg, and GMRES solvers
  • CSC, CSR, COO storage formats
  • Jacobi, Block Jacobi, ILU0 preconditioners
  • Double precision only
  • GPU solvers only
  • A single, simple C-style interface suitable for initial testing
  • Support for all the standard CUDA platforms: Linux 32/64, Win 32/64, OSX

Perhaps the most surprising item on that list is that this first release is double precision only. Please evaluate speed only on a platform with strong double precision performance, such as the Tesla 2000-series or the Quadro 6000. Functionality can still be evaluated on a gaming card such as the GTX 580, or on a lower-end compute part such as the C1060 or Quadro 5800, as long as it is Compute Capability 1.3 or greater. This will be covered in detail in the release notes as well.

We intend to grow the capabilities list dramatically in Beta revisions (see the release notes for CULA's Betas to see the scope of changes we intend). The immediate feedback we are looking for is on what will best serve our users in future versions, be it CPU solvers so as to have a fallback, more advanced interfaces, different preconditioners, etc. We're excited to hear what our users think!