Single-machine multi-GPU training:

If the model is not too complicated, you can switch to single-machine multi-GPU training by adjusting only the model and the data:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'    # select the two GPUs to use; set this before CUDA is initialized

from torch.nn.parallel import DataParallel
from tqdm import tqdm

model = DataParallel(model)                   # adjust the model: wrap it so each batch is split across GPUs


for step, pack in enumerate(tqdm(train_data_loader)):

    # img = pack['img'][0].cuda()
    img = pack['img'].data[0].cuda()          # adjust the data: move each batch onto the GPU
    gt_sem_seg = pack['gt_semantic_seg'].data[0].cuda()
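
To make the pattern above concrete, here is a minimal self-contained sketch of the same setup; the toy convolution model and the random stand-in batch are assumptions, not the original training code:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'     # must run before CUDA is initialized

import torch
import torch.nn as nn
from torch.nn.parallel import DataParallel

model = nn.Conv2d(3, 8, 3, padding=1).cuda()   # toy model (assumption)
model = DataParallel(model)                    # replicas run on the two visible GPUs (cuda:0 and cuda:1)

img = torch.randn(8, 3, 32, 32).cuda()         # random stand-in batch
out = model(img)                               # batch is split across GPUs, outputs gathered on cuda:0
print(out.device)                              # prints cuda:0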

In many cases that is all it takes, but sometimes you will still hit an error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0

Cause of the error and how to fix it

The immediate cause of the error is that two tensors sit on different GPUs and somewhere in the code they are combined in a single operation. In theory this should never happen (PyTorch assigns devices automatically), but in some models the tensors used by certain comparison or assignment statements are not updated when the multi-threaded replica calls run, and the problem shows up there. For example:

for i in self.unlabeled_cats:
    assert torch.all(gt_semantic_seg != i), f'Ground-truth leakage! {i}'
for i in self.clip_unlabeled_cats:
    assert torch.all(gt_semantic_seg != i), f'Ground-truth leakage! {i}'

When the program first starts, unlabeled_cats, clip_unlabeled_cats, and gt_semantic_seg are all on cuda:0. But by the time the next thread runs, gt_semantic_seg is already on cuda:1, while unlabeled_cats and clip_unlabeled_cats are still on cuda:0. The fix is to update them by hand: move unlabeled_cats and clip_unlabeled_cats to cuda:1 before the comparison, for example:

for i in self.unlabeled_cats.to(gt_semantic_seg.device):
    assert torch.all(gt_semantic_seg != i), f'Ground-truth leakage! {i}'
for i in self.clip_unlabeled_cats.to(gt_semantic_seg.device):
    assert torch.all(gt_semantic_seg != i), f'Ground-truth leakage! {i}'
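
The underlying reason this happens: DataParallel's replicate step copies parameters and registered buffers to every GPU, but a tensor stored as a plain Python attribute (self.unlabeled_cats = ...) is only shallow-copied, so every replica keeps pointing at the original cuda:0 tensor. Besides the manual .to(...) call, an alternative is to register the tensor as a buffer so replication moves it automatically; a minimal sketch, assuming the tensor is set up in the module's __init__ (the module name here is hypothetical):

import torch
import torch.nn as nn

class Head(nn.Module):                         # hypothetical module standing in for the real one
    def __init__(self, unlabeled_cats):
        super().__init__()
        # A registered buffer is copied to each GPU when DataParallel replicates
        # the module, so every replica sees it on its own device.
        self.register_buffer('unlabeled_cats', torch.as_tensor(unlabeled_cats))

    def forward(self, gt_semantic_seg):
        for i in self.unlabeled_cats:          # already on gt_semantic_seg.device
            assert torch.all(gt_semantic_seg != i), f'Ground-truth leakage! {i}'
        return gt_semantic_seg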

Other places can be fixed with the same manual .to(...) move; for example, this snippet can be changed like so:

match_matrix = output[unlabeled_idx]
if unlabeled_cats.device == gt_semantic_seg.device:       # check whether a device move is needed
    gt_semantic_seg[unlabeled_idx] = unlabeled_cats[match_matrix.argmax(dim=1)]
else:
    gt_semantic_seg[unlabeled_idx] = unlabeled_cats.to(gt_semantic_seg.device)[match_matrix.argmax(dim=1)]
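
Note that Tensor.to returns the tensor itself when it is already on the target device, so the device check above is optional; the two branches can be collapsed into one unconditional line at negligible cost:

# .to() is a no-op when unlabeled_cats is already on the right device
gt_semantic_seg[unlabeled_idx] = unlabeled_cats.to(gt_semantic_seg.device)[match_matrix.argmax(dim=1)]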