PyTorch v1.3.1 Release Notes

Release Date: 2019-11-07
  • 🛠 Significant Fixes

    🛠 Type Promotion: fixed a bug where type promotion combined with non-contiguous tensors could compute incorrect results. (28253)

    Version 1.3.0:

    >>> a = torch.tensor([[True, True],
                          [False, True]])
    # get a non-contiguous tensor
    >>> a_transpose = a.t()
    # type promote by comparing across dtypes (bool -> long)
    >>> a_transpose == 0
    # POTENTIALLY INCORRECT VALUES

    Version 1.3.1:

    >>> a = torch.tensor([[True, True],
                          [False, True]])
    # get a non-contiguous tensor
    >>> a_transpose = a.t()
    # type promote by comparing across dtypes (bool -> long)
    >>> a_transpose == 0
    tensor([[False,  True],
            [False, False]])
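
    One way to sidestep this on an affected 1.3.0 install is to materialize a contiguous copy before comparing, so the non-contiguous type-promotion path the bug affects is never taken. This is an illustrative sketch, not guidance from the release notes themselves:

    # Workaround sketch (1.3.0): force a contiguous layout before comparing
    >>> a = torch.tensor([[True, True],
                          [False, True]])
    >>> a_contig = a.t().contiguous()
    >>> a_contig == 0
    tensor([[False,  True],
            [False, False]])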

    🛠 Type Promotion / Indexing: fixed a bug where mixed-dtype indexed assignment could lead to incorrect results. Mixed-dtype operations of this form are currently disabled, as they were in 1.2. (28231)

    Version 1.3.0:

    >>> a = torch.ones(5, 2, dtype=torch.float)
    >>> b = torch.zeros(5, dtype=torch.long)
    >>> a[:, [1]] = b.unsqueeze(-1)
    >>> a
    # POTENTIALLY INCORRECT VALUES

    Version 1.3.1:

    >>> a = torch.ones(5, 2, dtype=torch.float)
    >>> b = torch.zeros(5, dtype=torch.long)
    >>> a[:, [1]] = b.unsqueeze(-1)
    RuntimeError: expected dtype Float but got dtype Long
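
    If a mixed-dtype assignment like this is intentional, explicitly casting the source so the dtypes match is accepted on both versions. The snippet below is an illustrative sketch using an ordinary Tensor.to cast; it is not taken from the release notes:

    # Sketch: cast the long-valued source to the float destination dtype first
    >>> a = torch.ones(5, 2, dtype=torch.float)
    >>> b = torch.zeros(5, dtype=torch.long)
    >>> a[:, [1]] = b.unsqueeze(-1).to(a.dtype)
    >>> a
    tensor([[1., 0.],
            [1., 0.],
            [1., 0.],
            [1., 0.],
            [1., 0.]])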

    🛠 torch.where(condition, x, y): fixed a bug on CPU where incorrect results could be returned if x and y were of different dtypes. Mixed-dtype operations of this form are currently disabled, as they were in version 1.2. (29078)

    Version 1.3.0:

    >>> x = torch.randn(2, 3)
    >>> y = torch.randint(0, 10, (2, 3))
    >>> torch.where(x < 0, x, y)
    tensor(...)  # POTENTIALLY INCORRECT VALUES

    Version 1.3.1:

    >>> x = torch.randn(2, 3)
    >>> y = torch.randint(0, 10, (2, 3))
    >>> torch.where(x < 0, x, y)
    RuntimeError: expected scalar type Float but found Long
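
    When mixing a floating-point x with an integer y is intentional, casting one operand so both share a dtype works on both versions. The sketch below casts y to x's dtype; the direction of the cast is an assumption made only for this example:

    # Sketch: promote y to x's dtype explicitly before calling torch.where
    >>> x = torch.randn(2, 3)
    >>> y = torch.randint(0, 10, (2, 3))
    >>> torch.where(x < 0, x, y.to(x.dtype))
    tensor(...)  # values depend on the random inputs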

    🛠 Other Fixes

    • 👍 torch.argmax: fix a regression on CUDA that disabled support for torch.float16 inputs (see the sketch after this list). (28915)
    • NamedTensor: fix a Python refcounting bug with Tensor.names. (28922)
    • 👍 Quantization: support deepcopy for quantized tensors. (28612)
    • 👍 Quantization: support nn.quantized.ReLU with inplace=True. (28710)
    • 📚 Documentation: torch.lgamma and torch.polygamma are now documented. (28964)
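
    As a quick illustration of the argmax and quantization items above, the sketch below exercises a float16 argmax on CUDA, a deepcopy of a quantized tensor, and an inplace quantized ReLU. The shapes, scale, and zero point are arbitrary assumptions for the example, and the first call assumes a CUDA device is available:

    >>> import copy
    >>> import torch
    >>> # float16 argmax on CUDA (regression fixed in 1.3.1); assumes a CUDA device
    >>> idx = torch.randn(4, 8, device="cuda", dtype=torch.float16).argmax(dim=1)
    >>> # deepcopy now works for quantized tensors (scale/zero_point chosen arbitrarily)
    >>> q = torch.quantize_per_tensor(torch.randn(3), scale=0.1, zero_point=0, dtype=torch.quint8)
    >>> q_copy = copy.deepcopy(q)
    >>> # nn.quantized.ReLU now accepts inplace=True
    >>> out = torch.nn.quantized.ReLU(inplace=True)(q)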