While writing CUDA programs I ran into the following problem.

A virtual machine emulates a graphics device, so the guest never gets to touch the real GPU. This follows from how hypervisors handle multiple VMs accessing the same device: the hypervisor inserts an intermediate layer that shares the real hardware among them.

My machine has an RTX 3080. Running an Ubuntu 20.04 LTS VM on Hyper-V under Windows 10 Pro for Workstations, I confirmed that the GPU cannot be passed through.

Some have pointed out that Hyper-V is probably the only hypervisor on Windows that supports using CUDA inside a VM, but the requirements are strict:

1- You need Windows Server 2016;

2- The CPU must support SLAT, plus VT-d or an IOMMU;

3- An enterprise-grade GPU; GeForce cards won't work.

The latest news is that WSL now supports CUDA:

Documentation: https://docs.nvidia.com/cuda/wsl-user-guide/index.html

Blog post: https://developer.nvidia.com/blog/announcing-cuda-on-windows-subsystem-for-linux-2/

Forum: https://forums.developer.nvidia.com/c/accelerated-computing/cuda/cuda-on-windows-subsystem-for-linux/303

Next, based on my own hands-on experience, here is how to install and deploy it.

1. Upgrade Windows 10 to build 20145 or later.

2. Update the WSL 2 kernel; see https://aka.ms/wsl2kernel for details.

3. In cmd, run: wsl --set-default-version 2 to make WSL 2 the default.

4. Install Ubuntu from the Microsoft Store.

5. In cmd, type ubuntu to launch it; inside Ubuntu, update packages with: sudo apt update && sudo apt upgrade

6. Check the kernel version with uname -r; it must be 4.19.121 or later (a quick sanity check is sketched after this list).

7. Install the CUDA toolkit, selecting the WSL-Ubuntu distribution. Note: do not install the CUDA driver during installation (WSL uses the Windows host driver). See: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=runfile_local
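
Before installing the toolkit in step 7, it is worth confirming that WSL 2 actually exposes the GPU. A minimal check, assuming the WSL GPU stack provides /dev/dxg and mounts the Windows driver's libraries under /usr/lib/wsl/lib as it did on my setup (paths may differ on other builds):

# should print 4.19.121 or newer
uname -r
# the GPU paravirtualization device exposed to the guest
ls -l /dev/dxg
# libcuda.so* here comes from the Windows driver, not from any Linux install
ls /usr/lib/wsl/lib | grep -i cuda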

The deb method can be sped up with the Aliyun mirror:

wget https://mirrors.aliyun.com/nvidia-cuda/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://mirrors.aliyun.com/nvidia-cuda/ubuntu2004/x86_64/7fa2af80.pub
sudo add-apt-repository "deb https://mirrors.aliyun.com/nvidia-cuda/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt-get -y install cuda-11-3
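
Before installing, you can check which repository apt will resolve the package from; a quick check with the standard apt-cache tool (the package name is the one installed above):

apt-cache policy cuda-11-3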
 

Installation note: the deb method kept failing for me; neither changing DNS nor going through a proxy helped. The error was:

Err:11 https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu2004/x86_64  Packages
  404  Not Found [IP: 124.132.138.66 443]
Fetched 216 kB in 4s (57.3 kB/s)
Reading package lists... Done
N: Ignoring file 'cuda-ubuntu2004.pin' in directory '/etc/apt/sources.list.d/' as it has an invalid filename extension
E: Failed to fetch https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu2004/x86_64/by-hash/SHA256/ce4d38aa740e318d2eae04cba08f1322017d162183c8f61f84391bf88020a534  404  Not Found [IP: 124.132.138.66 443]
E: Some index files failed to download. They have been ignored, or old ones used instead.

I chose the runfile [local] method instead. It failed with "Failed to verify gcc version. See log at /var/log/cuda-installer.log for details.", so I re-ran the installer with the --override flag:

wget https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda_11.3.1_465.19.01_linux.run

sudo sh cuda_11.3.1_465.19.01_linux.run --override
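
For an unattended install, the runfile also accepts flags for this; a sketch based on the runfile installer's documented options, installing only the toolkit (the driver must be skipped in WSL, since the Windows driver is used) and bypassing the gcc check:

sudo sh cuda_11.3.1_465.19.01_linux.run --silent --toolkit --override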

Set the environment variables (add the following lines to ~/.bashrc, then reload it):

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.3/lib64

export PATH=$PATH:/usr/local/cuda-11.3/bin

export CUDA_HOME=/usr/local/cuda-11.3

source ~/.bashrc
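
After reloading the shell, a quick way to confirm the toolkit is picked up:

# expect /usr/local/cuda-11.3/bin/nvcc
which nvcc
# expect "release 11.3" in the output
nvcc --version
# expect /usr/local/cuda-11.3
echo $CUDA_HOME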

Test by building and running the deviceQuery sample:

cd /usr/local/cuda-11.3/samples/1_Utilities/deviceQuery

sudo make

./deviceQuery

(The BlackScholes sample under 4_Finance/BlackScholes can be built and run the same way.)

Result:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA GeForce RTX 3080"
  CUDA Driver Version / Runtime Version          11.4 / 11.3
  CUDA Capability Major/Minor version number:    8.6
  Total amount of global memory:                 10240 MBytes (10737418240 bytes)
  (068) Multiprocessors, (128) CUDA Cores/MP:    8704 CUDA Cores
  GPU Max Clock rate:                            1710 MHz (1.71 GHz)
  Memory Clock rate:                             9501 Mhz
  Memory Bus Width:                              320-bit
  L2 Cache Size:                                 5242880 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        102400 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.3, NumDevs = 1
Result = PASS

cuDNN installation:

Download the version matching your CUDA release from https://developer.nvidia.com/rdp/cudnn-download, then extract it and copy the files into the toolkit tree:

tar -xzvf cudnn-11.3-linux-x64-v8.2.1.32.tgz 

sudo cp cuda/include/cudnn*.h /usr/local/cuda-11.3/include
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda-11.3/lib64
sudo chmod a+r /usr/local/cuda-11.3/include/cudnn*.h /usr/local/cuda-11.3/lib64/libcudnn*
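
To confirm the copy worked, the version macros can be read back from the header. For cuDNN 8.x they live in cudnn_version.h (older releases kept them in cudnn.h):

grep -A 2 "#define CUDNN_MAJOR" /usr/local/cuda-11.3/include/cudnn_version.h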

Verify the setup through TensorFlow:

import tensorflow as tf

gpu_device_name = tf.test.gpu_device_name()

print(gpu_device_name)
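
It should print a device name such as /device:GPU:0 when the GPU is visible. The same check can be run from the shell in one line, assuming a GPU-enabled TensorFlow 2.x built against CUDA 11.x is installed via pip:

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"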

To uninstall the toolkit:

/usr/local/cuda-11.3/bin/cuda-uninstaller
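
Note that the uninstaller only removes the toolkit itself; the cuDNN files copied above and the export lines added to ~/.bashrc have to be removed by hand. A sketch, matching the paths used earlier:

sudo rm /usr/local/cuda-11.3/include/cudnn*.h
sudo rm /usr/local/cuda-11.3/lib64/libcudnn*
# then delete the CUDA export lines from ~/.bashrc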
