Hi guys,
I am a researcher working in particle physics, and after going deep into ML I decided to move to GPU processing and test my algorithms there. I realize we already have huge GPU farms set up at our disposal, but I want to do this on my own laptop and be able to test my own core code.
First, my device:
- i7-2670QM, 16 GB RAM 1333 MHz
- GTX 560M 3 GB VRAM (2.1 compute capability, driver 391.35 - latest driver that can be installed for the 560M, so this is a limiting factor!)
Because of my actual hardware (i.e. the driver version), I have to use CUDA Toolkit 9.1 (to conform with Table 1 from here). This in turn requires Visual Studio 2017, because the toolkit does not accept any _MSC_VER higher than 1911. However, VS 2017 has since reached a version (15.9) with a higher _MSC_VER than 1911, so I had to download VS 2017 15.3 (_MSC_VER = 1911) to make it work. All this know-how was gained via trial and error (many installs and uninstalls, because of course I started by trying CUDA Toolkit 11!). As a smaller step I also had to install the Windows SDK 10.0.15063, because that is what the toolkit expected. Happy to say that everything finally compiles successfully (all the Samples and even a new project).
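As a quick sanity check that nvcc is really pairing CUDA 9.1 with the 15.3 host compiler, I compile and run a tiny .cu file of my own like the one below (the file name and the checks are just my own convention, not part of the Samples):

// check_toolchain.cu -- build with: nvcc check_toolchain.cu -o check_toolchain
#include <cstdio>
#include <cuda_runtime.h>   // provides CUDART_VERSION and the version query calls

int main()
{
#ifdef _MSC_VER
    std::printf("_MSC_VER             : %d\n", _MSC_VER);          // expect 1911 for VS 2017 15.3
#endif
#ifdef __CUDACC_VER_MAJOR__
    std::printf("nvcc version         : %d.%d\n", __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__);
#endif
    std::printf("CUDART_VERSION       : %d\n", CUDART_VERSION);    // 9010 for CUDA 9.1

    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);
    cudaRuntimeGetVersion(&runtimeVersion);
    std::printf("driver / runtime API : %d / %d\n", driverVersion, runtimeVersion);
    return 0;
}

With this setup the expected values are _MSC_VER = 1911 and CUDART_VERSION = 9010 (the version is encoded as major*1000 + minor*10).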
On the runtime side, deviceQuery gives the nice output:
deviceQuery.exe Starting…
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: “GeForce GTX 560M”
CUDA Driver Version / Runtime Version 9.1 / 9.1
CUDA Capability Major/Minor version number: 2.1
Total amount of global memory: 3072 MBytes (3221225472 bytes)
MapSMtoCores for SM 2.1 is undefined. Default to use 64 Cores/SM
MapSMtoCores for SM 2.1 is undefined. Default to use 64 Cores/SM
( 4) Multiprocessors, ( 64) CUDA Cores/MP: 256 CUDA Cores
GPU Max Clock rate: 1550 MHz (1.55 GHz)
Memory Clock rate: 1250 Mhz
Memory Bus Width: 192-bit
L2 Cache Size: 393216 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (65535, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
CUDA Device Driver Mode (TCC or WDDM): WDDM (Windows Display Driver Model)
Device supports Unified Addressing (UVA): Yes
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.1, CUDA Runtime Version = 9.1, NumDevs = 1
Result = PASS
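For completeness, the key numbers (compute capability 2.1, 4 multiprocessors) can also be read back from my own code via cudaGetDeviceProperties; the snippet below is just a sketch of that, not one of the Samples:

// print_cc.cu -- minimal check of what the runtime reports for device 0
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);   // device 0 = the GTX 560M
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("%s: compute capability %d.%d, %d SMs\n",
                prop.name, prop.major, prop.minor, prop.multiProcessorCount);
    return 0;
}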
Even bandwidthTest shows that "the pipes" are open:
[CUDA Bandwidth Test] - Starting…
Running on…

Device 0: GeForce GTX 560M
Quick Mode

Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes)   Bandwidth(MB/s)
33554432                6440.2

Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes)   Bandwidth(MB/s)
33554432                6340.1

Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes)   Bandwidth(MB/s)
33554432                44375.2

Result = PASS
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
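If it helps, the pinned-memory part of that test can be reproduced with a rough sketch of my own like the one below (a single 32 MiB transfer, so the number is noisy; this is a sanity check, not a benchmark):

// pinned_bw.cu -- rough host-to-device bandwidth check with pinned memory
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const size_t bytes = 32 * 1024 * 1024;             // 32 MiB, same size the Sample uses
    float *h_buf = 0, *d_buf = 0;
    cudaMallocHost((void**)&h_buf, bytes);             // page-locked (pinned) host buffer
    cudaMalloc((void**)&d_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpyAsync(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    std::printf("Host to Device: %.1f MB/s\n",
                (bytes / (1024.0 * 1024.0)) / (ms / 1000.0));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFreeHost(h_buf);
    cudaFree(d_buf);
    return 0;
}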
However, every app that does actual processing fails somewhere in the "no kernel image is available for execution on the device" region, and I traced it down to the fact that 9.1 does not know about compute_20,sm_21. Because 9.1 generates no code for the 2.1 compute capability, the kernels (e.g. addKernel in a new project, or the Sample kernels) are launched as images my GPU cannot execute. For example, the vectorAdd sample gives:
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Failed to launch vectorAdd kernel (error code no kernel image is available for execution on the device)!
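For what it's worth, a stripped-down repro of my own shows exactly where the failure surfaces (the kernel and file names here are mine, modelled on the new-project template, not the actual Sample code):

// repro.cu -- minimal vectorAdd-style launch with explicit error checking
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void addKernel(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 50000;
    const size_t bytes = n * sizeof(float);

    float* h_a = (float*)std::malloc(bytes);
    float* h_b = (float*)std::malloc(bytes);
    float* h_c = (float*)std::malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;
    cudaMalloc((void**)&d_a, bytes);
    cudaMalloc((void**)&d_b, bytes);
    cudaMalloc((void**)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    addKernel<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaError_t err = cudaGetLastError();   // the launch failure is reported here
    if (err != cudaSuccess)
        std::printf("launch failed: %s\n", cudaGetErrorString(err));
    // on my setup this prints "no kernel image is available for execution on the device",
    // because the binary contains no code the 2.1 device can run

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    std::free(h_a); std::free(h_b); std::free(h_c);
    return 0;
}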
Of course, editing Code Generation in the project properties just breaks the compilation. Reading around, I found out that 2.1 is still enabled in CUDA Toolkit 8, but switching to it would definitely mean more uninstalls and more SDK/VS version hunting, not to mention that driver version 391.35 is not meant for toolkit 8 (when I tried v11 on 391.35, even deviceQuery failed).
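For reference, here is roughly what that Code Generation property translates to on the nvcc command line (file names are just placeholders; the real 9.1 Sample projects list several newer architectures, but nothing below sm_30). The second invocation is the one the 9.1 compiler refuses, consistent with Fermi (sm_2x) code generation having been dropped in CUDA 9:

REM default target of the 9.1 toolchain -- builds, but contains no Fermi code:
nvcc -gencode arch=compute_30,code=sm_30 vectorAdd.cu -o vectorAdd.exe

REM what a compute-capability 2.1 GPU would need -- rejected by nvcc 9.1:
nvcc -gencode arch=compute_20,code=sm_21 vectorAdd.cu -o vectorAdd.exe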
So, are there kernel images that can be added to an already-installed CUDA toolkit after the fact (i.e. 2.1 support in toolkit 9.1)? Or what is the workaround for this problem without down-versioning… everything?