Module 'torch.distributed' has no attribute 'ReduceOp'

@qiangqiangsir the PyTorch 1.11 wheel was the last one built with USE_DISTRIBUTED enabled:

PyTorch v1.11.0

If you require a newer version of PyTorch with distributed enabled, please see this thread for instructions on building PyTorch from source:

Alternatively, it may be possible to disable distributed mode in the mmpretrain library you are using?
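If you go the workaround route, a guard like this can check at runtime whether your PyTorch build actually has distributed support before anything touches `torch.distributed.ReduceOp` (a minimal sketch, not mmpretrain's actual code):

```python
# Sketch: detect distributed support at runtime. On wheels built
# without USE_DISTRIBUTED, torch.distributed.is_available() returns
# False; the except clause also covers environments without torch.
try:
    import torch.distributed as dist
    HAS_DISTRIBUTED = dist.is_available()
except ImportError:
    HAS_DISTRIBUTED = False

print(HAS_DISTRIBUTED)
```

Code paths that use `ReduceOp` or `init_process_group` can then be skipped when `HAS_DISTRIBUTED` is `False`.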
