
multiGPU example

Posted: Fri Mar 02, 2012 2:16 am
by phoebe
I saw in the README that there is a multiGPU example, but I couldn't find it. Where can I find it, and does CULA Sparse support multiGPU? Thx!

Re: multiGPU example

Posted: Fri Mar 02, 2012 1:14 pm
by john
Hi Phoebe,
The README is inadvertently describing the examples in CULA Dense. We will correct that.

That said, you can download the free version of CULA Dense to check out that example. While CULA Sparse will not use multiple GPUs to solve a single system, the library can be used to issue separate problems simultaneously to different GPUs. The example code will demonstrate how.

Re: multiGPU example

Posted: Tue Mar 13, 2012 8:11 am
by suzannepk
Is there a FORTRAN example for multiGPU anywhere? The one mentioned here is in C, if I have found the correct example.

Suzanne

Re: multiGPU example

Posted: Tue Mar 13, 2012 8:31 am
by john
There is no such example, but CULA usage in a multithreaded environment is the same as single-threaded usage. Just call culaInitialize(), then your routines, then culaShutdown() in each thread. A minimal sketch follows.
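
Here is a minimal sketch of that pattern in C, assuming POSIX threads and using culaSelectDevice() from CULA's device interface to bind each host thread to its own GPU (check your release's headers for the exact umbrella header name). The body of worker() is a placeholder; drop in whatever solver routine your problem needs.

Code:
#include <stdio.h>
#include <pthread.h>
#include <cula.h>   /* umbrella header; the name may differ by CULA release */

/* Each worker binds to its own GPU, initializes CULA, runs its problem,
   and shuts CULA down again. */
static void* worker(void* arg)
{
    int device = *(int*)arg;
    culaStatus status;

    /* Bind this host thread to one GPU before initializing CULA. */
    status = culaSelectDevice(device);
    if (status != culaNoError)
    {
        fprintf(stderr, "culaSelectDevice(%d) failed\n", device);
        return NULL;
    }

    status = culaInitialize();
    if (status != culaNoError)
    {
        fprintf(stderr, "culaInitialize failed on device %d\n", device);
        return NULL;
    }

    /* ... call the CULA routines for this thread's problem here ... */

    culaShutdown();
    return NULL;
}

int main(void)
{
    pthread_t threads[2];
    int devices[2] = { 0, 1 };  /* one host thread per GPU */
    int i;

    for (i = 0; i < 2; ++i)
        pthread_create(&threads[i], NULL, worker, &devices[i]);
    for (i = 0; i < 2; ++i)
        pthread_join(threads[i], NULL);

    return 0;
}

Each thread gets an independent CULA context, so the two problems run concurrently on their respective GPUs with no coordination between them.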

Re: multiGPU example

Posted: Mon Mar 19, 2012 10:56 am
by suzannepk
Thank you, John! It is running.

Re: multiGPU example

Posted: Wed Mar 21, 2012 2:04 am
by phoebe
Thanks! I'll try it.

Re: multiGPU example

Posted: Wed Mar 21, 2012 9:08 pm
by phoebe
john wrote: "While CULA Sparse will not use multiple GPUs to solve a single system, the library can be used to issue separate problems simultaneously to different GPUs."

You said that CULA Sparse does not use multiple GPUs to solve a single system. Do you have any plans to update CULA Sparse so that it can use multiple GPUs to solve a single system in the future? Thanks!

Re: multiGPU example

Posted: Fri Mar 23, 2012 9:07 am
by john
It's likely, but I'll caution that it's not on the immediate horizon. We're still seeking motivating cases for this need; most of our customer problems to date are solved more effectively by a single GPU than by multiple (i.e., the overhead of having the GPUs communicate among themselves outweighs the performance gains of using more than one).