can "pinned" main memory pages be used w/ CULA

General CULA Dense (LAPACK & BLAS) support and troubleshooting. Use this forum if you are having a general problem or have encountered a bug.


Postby diffent » Mon Dec 07, 2009 9:16 am

Hello all,
there are some notes in the Nvidia CUDA documentation about using "pinned" main memory pages (marked as non-swappable) so that both the CPU and GPU can access the same memory this type of memory supported by CULA? I would guess that since the GPU accesses it, it is treated as device memory after it is pinned. The reason for this would be larger data set handling, where the entire problem cannot be contained within device memory, with some performance penalty. If there is a suggested way to better handle this (i.e. if CULA automatically partitions too-large-to-fit problems into smaller pieces for device execution), that would be good to know, too. Thanks!
Posts: 3
Joined: Sat Nov 07, 2009 7:38 pm

Re: can "pinned" main memory pages be used w/ CULA

Postby john » Wed Dec 16, 2009 11:10 am

This is an interesting approach to out-of-core computing, but mapped pinned memory is targeted more toward integrated GPUs (such as those in laptops), where the memory is physically shared between the CPU and GPU. As such, CULA does not use this type of memory. For our out-of-core computations, we will likely take a different path.

Posts: 587
Joined: Thu Jul 23, 2009 2:31 pm

