20 May 2011

CULA R12 Highlight: Link Interface

by Dan

Today we’re very excited to talk about a key feature of CULA R12 that we’ve titled the “Link Interface”. The Link Interface is a link-compatible way of using CULA in your existing programs: with no modifications to your code, you can use CULA’s GPU accelerated routines simply by changing your application’s link settings. It is a true drop-in way to swap out your current package for a GPU accelerated one.
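To make that concrete, here is a minimal sketch of the kind of program the Link Interface targets: plain C code calling the standard LAPACK routine dgesv, with no CULA headers or CULA-specific calls anywhere in the source. The routine, matrix values, and build commands below are only an illustration, not taken from a CULA example.

```c
/* solve.c -- an ordinary LAPACK call with no CULA-specific code.
 * Solves A*x = b for a small 3x3 system using dgesv. */
#include <stdio.h>

/* Standard Fortran LAPACK symbol. With a link-compatible library,
 * this same symbol is resolved to a definition that dispatches the
 * work to the GPU or CPU as appropriate. */
extern void dgesv_(const int *n, const int *nrhs, double *a, const int *lda,
                   int *ipiv, double *b, const int *ldb, int *info);

int main(void)
{
    int n = 3, nrhs = 1, lda = 3, ldb = 3, info = 0;
    int ipiv[3];
    /* Column-major storage, as LAPACK expects. */
    double a[9] = { 4, 1, 2,   1, 3, 0,   2, 0, 5 };
    double b[3] = { 7, 4, 7 };

    dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);

    /* Expected solution: x = [1 1 1] */
    printf("info = %d, x = [%g %g %g]\n", info, b[0], b[1], b[2]);
    return 0;
}
```

Against a reference LAPACK this might be built with something like `gcc solve.c -llapack -lblas`; switching to CULA would mean replacing the libraries named on that link line with the CULA link libraries (the exact library names depend on your installation), while the source file stays exactly as it is.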

Within the Link Interface we’ve put a lot of work into ensuring that we support all of the needs of your program. Here is just a sampling of these capabilities:

  • GPU acceleration with a single code path. The link interface intercepts all LAPACK and BLAS calls and then dispatches them appropriately. If an accelerated version of a called function is available and the parameters are a sensible combination for GPU acceleration, then the CULA version is called. If not, or the user does not have a GPU, the function will run on the CPU.
  • All functions are available. The link interface provides definitions for all of the functions in LAPACK and BLAS. You don’t have to know which are GPU accelerated and which are not; the link interface handles that for you (although we do have options to show you, if you want to know).
  • Choose which functions are GPU accelerated and which are not. The link interface supports a configuration file that lets you override our defaults and decide which functions are issued to the GPU and which to the CPU.
  • Accelerated level 3 BLAS is supported. In addition to LAPACK, the link interface provides GPU accelerated definitions for level 3 BLAS functions, such as matrix multiply, that benefit from GPU acceleration (see the sketch after this list).
  • Coexists peacefully with other packages. If you would like to use CULA for one part of your application but rely on other packages for different functionality, rest assured that CULA can coexist with other packages like MKL or ACML.
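As one illustrative sketch of a call the link interface can pick up, here is an ordinary level 3 BLAS matrix multiply (dgemm) written against the standard Fortran interface. Nothing in the code refers to CULA; under the link interface the same symbol can be routed to the GPU when the problem is a good fit, or to the CPU otherwise. The values are made up for the example.

```c
/* gemm_example.c -- a plain level 3 BLAS call: C = alpha*A*B + beta*C. */
#include <stdio.h>

/* Standard Fortran BLAS symbol for double-precision matrix multiply. */
extern void dgemm_(const char *transa, const char *transb,
                   const int *m, const int *n, const int *k,
                   const double *alpha, const double *a, const int *lda,
                   const double *b, const int *ldb,
                   const double *beta, double *c, const int *ldc);

int main(void)
{
    int m = 2, n = 2, k = 2;
    double alpha = 1.0, beta = 0.0;
    /* Column-major 2x2 matrices. */
    double a[4] = { 1, 3,   2, 4 };   /* A = [1 2; 3 4] */
    double b[4] = { 5, 7,   6, 8 };   /* B = [5 6; 7 8] */
    double c[4] = { 0, 0,   0, 0 };

    dgemm_("N", "N", &m, &n, &k, &alpha, a, &m, b, &k, &beta, c, &m);

    /* Expected result: C = [19 22; 43 50] */
    printf("C = [%g %g; %g %g]\n", c[0], c[2], c[1], c[3]);
    return 0;
}
```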

Having link compatibility is a stepping stone towards some amazing applications. For example, using our link interface you can use GPU accelerated functions in Matlab with nearly zero effort, a capability we’ll discuss in a future post.

Our main goal with this feature is to help those who have not yet tried adding GPU acceleration to their codes to do so with almost no work. As soon as CUDA 4 is released, look forward to a full announcement of our R12 release and all of the new capabilities it delivers!