Directly from SC11

by Liana

The entire CULA team is here in Seattle and everyone is pumped up for the first big day of action. Last night, at the opening gala, we were pleased to see familiar faces all around us. It's not an easy showroom to navigate, but we hope our users will find us at booth #244. A number of people came by our booth to ask about CULA Sparse, as well as a few scavenger hunters (fun!), and we hope this will be another great show for everyone. Today we will be catching up with our partners to find out what their vision of the SC market is and how we can work together and contribute to their strategies.

By the way, it is TODAY that John Humphrey will be giving his presentation on CULA Sparse and all of the great features added to the CULA Dense library!  We hope you can make it!

What: Exhibitor Forums: Advances in the CULA Linear Algebra Library 
Where: 613/614

Enjoy the show!


CULA Talk at SC11, join us!

by Liana

If you're going to the upcoming Supercomputing conference in Seattle, you'll have the opportunity to attend John Humphrey's presentation on CULA. John's talk this year will focus on the new product features, including Sparse solvers and the zero-effort Link Interface for instant acceleration. He will show how easy it is to use the link interface with MATLAB, and will also share examples of how users are taking advantage of the new feature.
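For readers curious what "zero-effort" looks like in practice: MATLAB on Linux lets you swap in an alternate LAPACK/BLAS at launch via environment variables, which is the kind of hook a link-style library can use. The sketch below is purely illustrative; the library path and filename are assumptions, not documented CULA paths.

```shell
# Hypothetical sketch: point MATLAB's runtime-swappable LAPACK at a
# GPU-accelerated link library before launching. The path and filename
# below are invented for illustration.
export LAPACK_VERSION=/usr/local/cula/lib64/libcula_lapack_link.so
matlab -nodesktop
```

Because the swap happens at launch time, existing MATLAB scripts need no changes to benefit.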

What: Exhibitor Forums: Advances in the CULA Linear Algebra Library
When: Tuesday, 11/15
Where: 613/614

If you can't make it to John's presentation, stop by our EM Photonics booth (#244) to meet the entire CULA team, myself included.  Finding us may be tricky this year, so you may want to check out the exhibitor map online first.  Hope to see you there!



by Kyle

EM Photonics and a few members of the CULA team will be attending SPIE's Defense, Security, & Sensing (DSS) conference next week in Orlando, Florida. In addition to a booth in the exhibit hall, we'll be presenting a number of papers, including one detailing the latest work involving our sparse linear algebra solvers. If you are attending the conference, please stop by our booth or visit one of our talks!

Here is the abstract for our sparse linear algebra talk:

The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to that of a high-end CPU, with an excellent FLOPS-per-watt ratio. High-level sparse linear algebra operations are computationally intensive, often exposing large amounts of parallelism, and would seem a natural fit for the processing power of the GPU. Our work is a GPU-accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers.

The GPU execution model used by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU, while others map poorly. CPUs, on the other hand, do well at lower-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph-theory portion of the direct solvers while the GPU simultaneously performs the low-level linear algebra routines.
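The overlap described above can be sketched as two concurrent tasks. This is a minimal illustrative sketch, not CULA's actual code: the function bodies are invented stand-ins for the real symbolic-analysis and numeric-kernel work.

```python
# Illustrative sketch of a hybrid CPU/GPU split: one thread does the
# serial graph-theory work while another does bulk data-parallel work.
from concurrent.futures import ThreadPoolExecutor

def cpu_symbolic_analysis(n):
    # Stand-in for graph-theory work on the CPU
    # (e.g., computing a fill-reducing ordering).
    return list(range(n - 1, -1, -1))

def gpu_numeric_kernel(values):
    # Stand-in for a bulk data-parallel kernel that would run on the GPU.
    return [v * v for v in values]

def hybrid_solve_step(n, values):
    # Launch both tasks at once so neither platform sits idle.
    with ThreadPoolExecutor(max_workers=2) as pool:
        sym = pool.submit(cpu_symbolic_analysis, n)
        num = pool.submit(gpu_numeric_kernel, values)
        return sym.result(), num.result()

perm, squared = hybrid_solve_step(4, [1.0, 2.0, 3.0])
```

The key design point is that the ordering work and the numeric work are independent enough within a step to proceed concurrently, so each platform does what it performs most naturally.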

We'll also be presenting and demonstrating work from our image processing and fluid dynamics teams.