PyTorch v1.3.1 Release Notes
Release Date: 2019-11-07

Significant Fixes
Type Promotion: fixed a bug where type promotion combined with non-contiguous tensors could compute incorrect results. (28253)
Version 1.3.0:

```python
>>> a = torch.tensor([[True, True], [False, True]])
>>> # get a non-contiguous tensor
>>> a_transpose = a.t()
>>> # type promote by comparing across dtypes (bool -> long)
>>> a_transpose == 0
# POTENTIALLY INCORRECT VALUES
```

Version 1.3.1:

```python
>>> a = torch.tensor([[True, True], [False, True]])
>>> # get a non-contiguous tensor
>>> a_transpose = a.t()
>>> # type promote by comparing across dtypes (bool -> long)
>>> a_transpose == 0
tensor([[False,  True],
        [False, False]])
```
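As a minimal sketch of the fixed behavior (assuming PyTorch 1.3.1 or later), the comparison on the transposed, non-contiguous bool tensor now yields the correct result:

```python
import torch

# Reproduce the example above: a 2x2 bool tensor whose transpose
# is a non-contiguous view.
a = torch.tensor([[True, True], [False, True]])
a_transpose = a.t()
assert not a_transpose.is_contiguous()

# Comparing against the integer 0 promotes across dtypes (bool -> long).
# On 1.3.0 this could return incorrect values for non-contiguous inputs;
# on 1.3.1 it matches the contiguous result.
result = a_transpose == 0
expected = torch.tensor([[False, True], [False, False]])
```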
Type Promotion / Indexing: fixed a bug where mixed-dtype indexed assignment could lead to incorrect results. Mixed-dtype operations of this form are currently disabled, as they were in 1.2. (28231)
Version 1.3.0:

```python
>>> a = torch.ones(5, 2, dtype=torch.float)
>>> b = torch.zeros(5, dtype=torch.long)
>>> a[:, [1]] = b.unsqueeze(-1)
>>> a
# POTENTIALLY INCORRECT VALUES
```

Version 1.3.1:

```python
>>> a = torch.ones(5, 2, dtype=torch.float)
>>> b = torch.zeros(5, dtype=torch.long)
>>> a[:, [1]] = b.unsqueeze(-1)
RuntimeError: expected dtype Float but got dtype Long
```
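Since mixed-dtype indexed assignment now raises, code relying on the old behavior needs an explicit cast. A minimal sketch, reusing the names from the example above:

```python
import torch

a = torch.ones(5, 2, dtype=torch.float)
b = torch.zeros(5, dtype=torch.long)

# Mixed-dtype indexed assignment raises in 1.3.1;
# cast the right-hand side to the destination dtype explicitly.
a[:, [1]] = b.unsqueeze(-1).to(a.dtype)
```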
torch.where(condition, x, y): fixed a bug on CPU where incorrect results could be returned if x and y were of different dtypes. Mixed-dtype operations of this form are currently disabled, as they were in version 1.2. (29078)

Version 1.3.0:

```python
>>> x = torch.randn(2, 3)
>>> y = torch.randint(0, 10, (2, 3))
>>> torch.where(x < 0, x, y)
tensor(...)  # POTENTIALLY INCORRECT VALUES
```

Version 1.3.1:

```python
>>> x = torch.randn(2, 3)
>>> y = torch.randint(0, 10, (2, 3))
>>> torch.where(x < 0, x, y)
RuntimeError: expected scalar type Float but found Long
```
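To keep using torch.where with operands of different dtypes, cast one side so both match. A minimal sketch, assuming y should be promoted to x's floating dtype:

```python
import torch

x = torch.randn(2, 3)
y = torch.randint(0, 10, (2, 3))

# torch.where with mixed dtypes raises in 1.3.1;
# cast y to x's dtype before the call.
out = torch.where(x < 0, x, y.to(x.dtype))
```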
Other Fixes
- `torch.argmax`: fix regression on CUDA that disabled support for `torch.float16` inputs. (28915)
- NamedTensor: fix Python refcounting bug with `Tensor.names`. (28922)
- Quantization: support `deepcopy` for quantized tensors. (28612)
- Quantization: support `nn.quantized.ReLU` with `inplace=True`. (28710)
- Documentation: `torch.lgamma` and `torch.polygamma` are now documented. (28964)