Changelog History

  • v2.1.0-rc2 Changes

    December 23, 2019

    πŸš€ Release 2.1.0-rc2

    πŸš€ TensorFlow 2.1 will be the last TF release supporting Python 2. Python 2 support officially ends on January 1, 2020; as announced earlier, TensorFlow will stop supporting Python 2 on that date, and no more releases are expected in 2019.

    Major Features and Improvements

    • 🐧 The tensorflow pip package now includes GPU support by default (same as tensorflow-gpu) for both Linux and Windows. This runs on machines with and without NVIDIA GPUs. tensorflow-gpu is still available, and CPU-only packages can be downloaded at tensorflow-cpu for users who are concerned about package size.
    • 🏁 Windows users: Officially-released tensorflow Pip packages are now built with Visual Studio 2019 version 16.4 in order to take advantage of the new /d2ReducedOptimizeHugeFunctions compiler flag. To use these new packages, you must install "Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019", available from Microsoft's website.
      • This does not change the minimum required version for building TensorFlow from source on Windows, but builds enabling EIGEN_STRONG_INLINE can take over 48 hours to compile without this flag. Refer to configure.py for more information about EIGEN_STRONG_INLINE and /d2ReducedOptimizeHugeFunctions.
      • If either of the required DLLs, msvcp140.dll (old) or msvcp140_1.dll (new), is missing on your machine, import tensorflow will print a warning message.
    • πŸ“¦ The tensorflow pip package is built with CUDA 10.1 and cuDNN 7.6.
    • tf.keras
      • Experimental support for mixed precision is available on GPUs and Cloud TPUs. See usage guide.
      • Introduced the TextVectorization layer, which takes raw strings as input and handles text standardization, tokenization, n-gram generation, and vocabulary indexing. See this end-to-end text classification example; a minimal usage sketch also follows this list.
      • Keras .compile, .fit, .evaluate, and .predict are allowed to be outside of the DistributionStrategy scope, as long as the model was constructed inside of a scope.
      • Experimental support for Keras .compile, .fit, .evaluate, and .predict is available for Cloud TPUs and Cloud TPU Pods, for all types of Keras models (sequential, functional, and subclassing models).
      • Automatic outside compilation is now enabled for Cloud TPUs. This allows tf.summary to be used more conveniently with Cloud TPUs.
      • Dynamic batch sizes with DistributionStrategy and Keras are supported on Cloud TPUs.
      • Support for .fit, .evaluate, .predict on TPU using numpy data, in addition to tf.data.Dataset.
      • Keras reference implementations for many popular models are available in the TensorFlow Model Garden.
    • tf.data
      • Changes rebatching for tf.data datasets + DistributionStrategy for better performance. Note that the dataset also behaves slightly differently, in that the rebatched dataset cardinality will always be a multiple of the number of replicas.
      • tf.data.Dataset now supports automatic data distribution and sharding in distributed environments, including on TPU pods.
      • Distribution policies for tf.data.Dataset can now be tuned with tf.data.experimental.AutoShardPolicy (OFF, AUTO, FILE, DATA) and tf.data.experimental.ExternalStatePolicy (WARN, IGNORE, FAIL). A sharding sketch follows this list.
    • tf.debugging
      • Add tf.debugging.enable_check_numerics() and tf.debugging.disable_check_numerics() to help debug the root causes of issues involving infinities and NaNs.
    • tf.distribute
      • Custom training loop support on TPUs and TPU pods is available through strategy.experimental_distribute_dataset, strategy.experimental_distribute_datasets_from_function, strategy.experimental_run_v2, strategy.reduce.
      • Support for a global distribution strategy through tf.distribute.experimental_set_strategy(), in addition to strategy.scope().
    • TensorRT
      • TensorRT 6.0 is now supported and enabled by default. This adds support for more TensorFlow ops including Conv3D, Conv3DBackpropInputV2, AvgPool3D, MaxPool3D, ResizeBilinear, and ResizeNearestNeighbor. In addition, the TensorFlow-TensorRT python conversion API is exported as tf.experimental.tensorrt.Converter.
    • Environment variable TF_DETERMINISTIC_OPS has been added. When set to "true" or "1", this environment variable makes tf.nn.bias_add operate deterministically (i.e. reproducibly), but currently only when XLA JIT compilation is not enabled. Setting TF_DETERMINISTIC_OPS to "true" or "1" also makes cuDNN convolution and max-pooling operate deterministically. This makes Keras Conv*D and MaxPool*D layers operate deterministically in both the forward and backward directions when running on a CUDA-enabled GPU.
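
    The following sketches are illustrative, not taken from the release notes; corpus contents, file names, and parameter values are hypothetical. First, the TextVectorization layer introduced above (in TF 2.1 it lives under the experimental preprocessing namespace):

    ```python
    import numpy as np
    import tensorflow as tf  # TF 2.1

    # Hypothetical toy corpus of raw strings.
    corpus = np.array(["The quick brown fox", "jumped over the lazy dog"])

    vectorize = tf.keras.layers.experimental.preprocessing.TextVectorization(
        max_tokens=1000,           # cap on vocabulary size
        output_mode="int",         # emit integer token indices
        output_sequence_length=8,  # pad/truncate every example to length 8
    )
    vectorize.adapt(corpus)        # build the vocabulary from the raw strings

    print(vectorize(corpus))       # int tensor of shape (2, 8)
    ```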
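
    Next, a minimal sketch of tuning the new tf.data sharding policy; the TFRecord file names are placeholders:

    ```python
    import tensorflow as tf

    options = tf.data.Options()
    # FILE assigns whole input files to workers; DATA shards individual
    # elements; AUTO picks FILE when possible and falls back to DATA.
    options.experimental_distribute.auto_shard_policy = (
        tf.data.experimental.AutoShardPolicy.FILE)

    # Placeholder file names; any file-based dataset works the same way.
    dataset = tf.data.TFRecordDataset(["train-0.tfrecord", "train-1.tfrecord"])
    dataset = dataset.with_options(options)
    ```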
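
    Finally, a sketch of the TF_DETERMINISTIC_OPS environment variable described in the last item; the tiny model is illustrative:

    ```python
    import os

    # Must be set before TensorFlow initializes the ops in question.
    os.environ["TF_DETERMINISTIC_OPS"] = "1"

    import tensorflow as tf

    # With the flag set, cuDNN convolution and max pooling select deterministic
    # algorithms (and tf.nn.bias_add is deterministic while XLA JIT is off), so
    # Conv*D / MaxPool*D layers reproduce bit-identical results on a CUDA GPU.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPool2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])
    ```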

    πŸ’₯ Breaking Changes

    • Deletes Operation.traceback_with_start_lines for which we know of no usages.
    • Removed id from tf.Tensor.__repr__() as id is not useful other than for internal debugging.
    • Some tf.assert_* methods now raise assertions at operation creation time if the input tensors' values are known at that time, not during the session.run(). This only changes behavior when the graph execution would have resulted in an error. When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in feed_dict argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
    • The following APIs are no longer experimental: tf.config.list_logical_devices, tf.config.list_physical_devices, tf.config.get_visible_devices, tf.config.set_visible_devices, tf.config.get_logical_device_configuration, tf.config.set_logical_device_configuration.
    • tf.config.experimental.VirtualDeviceConfiguration has been renamed to tf.config.LogicalDeviceConfiguration.
    • tf.config.experimental_list_devices has been removed; please use tf.config.list_logical_devices instead.
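
    A short sketch of the now-stable device-configuration APIs listed above; the 1 GiB memory limits are illustrative:

    ```python
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    if gpus:
        # Carve the first physical GPU into two 1 GiB logical devices;
        # this must run before the GPU is first initialized.
        tf.config.set_logical_device_configuration(
            gpus[0],
            [tf.config.LogicalDeviceConfiguration(memory_limit=1024),
             tf.config.LogicalDeviceConfiguration(memory_limit=1024)],
        )
        print(tf.config.list_logical_devices("GPU"))  # two logical GPUs
    ```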

    πŸ› Bug Fixes and Other Changes

    • tf.data
      • Fixes concurrency issue with tf.data.experimental.parallel_interleave with sloppy=True.
      • Add tf.data.experimental.dense_to_ragged_batch().
      • Extend tf.data parsing ops to support RaggedTensors.
    • tf.distribute
      • Fix issue where GRU would crash or give incorrect output when a tf.distribute.Strategy was used.
    • tf.estimator
      • Added option in tf.estimator.CheckpointSaverHook to not save the GraphDef.
      • Moved the checkpoint reader from SWIG to pybind11.
    • tf.keras
      • Export depthwise_conv2d in tf.keras.backend.
      • In Keras Layers and Models, Variables in trainable_weights, non_trainable_weights, and weights are explicitly deduplicated.
      • Keras model.load_weights now accepts skip_mismatch as an argument. This was available in external Keras, and has now been copied over to tf.keras.
      • Fix the input shape caching behavior of Keras convolutional layers.
      • Model.fit_generator, Model.evaluate_generator, Model.predict_generator, Model.train_on_batch, Model.test_on_batch, and Model.predict_on_batch methods now respect the run_eagerly property, and will correctly run using tf.function by default. Note that Model.fit_generator, Model.evaluate_generator, and Model.predict_generator are deprecated endpoints. They are subsumed by Model.fit, Model.evaluate, and Model.predict which now support generators and Sequences.
    • tf.lite
      • Legalization for NMS ops in TFLite.
      • Add narrow_range and axis to quantize_v2 and dequantize ops.
      • Added support for FusedBatchNormV3 in converter.
      • Add an errno-like field to NNAPI delegate for detecting NNAPI errors for fallback behaviour.
      • Refactors NNAPI Delegate to report the detailed reason why an operation is not accelerated.
      • Converts hardswish subgraphs into atomic ops.
    • Other
      • Critical stability updates for TPUs, especially in cases where the XLA compiler produces compilation errors.
      • TPUs can now be re-initialized multiple times, using tf.tpu.experimental.initialize_tpu_system.
      • Add RaggedTensor.merge_dims().
      • Added new uniform_row_length row-partitioning tensor to RaggedTensor.
      • Add shape arg to RaggedTensor.to_tensor; Improve speed of RaggedTensor.to_tensor.
      • tf.io.parse_sequence_example and tf.io.parse_single_sequence_example now support ragged features.
      • Fix while_v2 with variables in custom gradient.
      • Support taking gradients of V2 tf.cond and tf.while_loop using LookupTable.
      • Fix bug where vectorized_map failed on inputs with unknown static shape.
      • Add preliminary support for sparse CSR matrices.
      • Tensor equality with None now behaves as expected.
      • Make calls to tf.function(f)(), tf.function(f).get_concrete_function and tf.function(f).get_initialization_function thread-safe.
      • Extend tf.identity to work with CompositeTensors (such as SparseTensor).
      • Added more dtypes and zero-sized inputs to Einsum Op and improved its performance.
      • Enable multi-worker NCCL all-reduce inside functions executing eagerly.
      • Added complex128 support to RFFT, RFFT2D, RFFT3D, IRFFT, IRFFT2D, and IRFFT3D.
      • Add pfor converter for SelfAdjointEigV2.
      • Add tf.math.ndtri and tf.math.erfinv.
      • Add tf.config.experimental.enable_mlir_bridge to allow using the MLIR compiler bridge in eager mode.
      • Added support for MatrixSolve on Cloud TPU / XLA.
      • Added tf.autodiff.ForwardAccumulator for forward-mode autodiff (a short sketch follows this list).
      • Add LinearOperatorPermutation.
      • A few performance optimizations on tf.reduce_logsumexp.
      • Added multilabel handling to the AUC metric.
      • Optimization on zeros_like.
      • Dimension constructor now requires None or types with an __index__ method.
      • Add tf.random.uniform microbenchmark.
      • Use _protogen suffix for proto library targets instead of _cc_protogen suffix.
      • tf.device and MirroredStrategy now support passing in a tf.config.LogicalDevice.
      • If you're building TensorFlow from source, consider using bazelisk to automatically download and use the correct Bazel version. Bazelisk reads the .bazelversion file at the root of the project directory.
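
    As referenced in the list above, a minimal sketch of forward-mode autodiff with the new tf.autodiff.ForwardAccumulator:

    ```python
    import tensorflow as tf

    x = tf.constant(3.0)
    # Seed a tangent vector and push it forward through the computation.
    with tf.autodiff.ForwardAccumulator(
            primals=x, tangents=tf.constant(1.0)) as acc:
        y = x * x
    print(acc.jvp(y))  # Jacobian-vector product: dy/dx = 2x = 6.0
    ```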

    Thanks to our Contributors

    πŸš€ This release contains contributions from many people at Google, as well as:

    8bitmp3, Aaron Ma, AbdΓΌLhamit Yilmaz, Abhai Kollara, aflc, Ag Ramesh, Albert Z. Guo, Alex Torres, amoitra, Andrii Prymostka, angeliand, Anshuman Tripathy, Anthony Barbier, Anton Kachatkou, Anubh-V, Anuja Jakhade, Artem Ryabov, autoih, Bairen Yi, Bas Aarts, Basit Ayantunde, Ben Barsdell, Bhavani Subramanian, Brett Koonce, candy.dc, Captain-Pool, caster, cathy, Chong Yan, Choong Yin Thong, Clayne Robison, Colle, Dan Ganea, David Norman, David Refaeli, dengziming, Diego Caballero, Divyanshu, djshen, Douman, Duncan Riach, EFanZh, Elena Zhelezina, Eric Schweitz, Evgenii Zheltonozhskii, Fei Hu, fo40225, Fred Reiss, Frederic Bastien, Fredrik Knutsson, fsx950223, fwcore, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, giuros01, Gomathi Ramamurthy, Guozhong Zhuang, Haifeng Jin, Haoyu Wu, HarikrishnanBalagopal, HJYOO, Huang Chen-Yi, Ilham Firdausi Putra, Imran Salam, Jared Nielsen, Jason Zaman, Jasper Vicenti, Jeff Daily, Jeff Poznanovic, Jens Elofsson, Jerry Shih, jerryyin, Jesper Dramsch, jim.meyer, Jongwon Lee, Jun Wan, Junyuan Xie, Kaixi Hou, kamalkraj, Kan Chen, Karthik Muthuraman, Keiji Ariyama, Kevin Rose, Kevin Wang, Koan-Sin Tan, kstuedem, Kwabena W. Agyeman, Lakshay Tokas, latyas, Leslie-Fang-Intel, Li, Guizi, Luciano Resende, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manuel Freiberger, Mark Ryan, Martin Mlostek, Masaki Kozuki, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Muhwan Kim, Nagy Mostafa, nammbash, Nathan Luehr, Nathan Wells, Niranjan Hasabnis, Oleksii Volkovskyi, Olivier Moindrot, olramde, Ouyang Jin, OverLordGoldDragon, Pallavi G, Paul Andrey, Paul Wais, pkanwar23, Pooya Davoodi, Prabindh Sundareson, Rajeshwar Reddy T, Ralovich, Kristof, Refraction-Ray, Richard Barnes, richardbrks, Robert Herbig, Romeo Kienzler, Ryan Mccormick, saishruthi, Saket Khandelwal, Sami Kama, Sana Damani, Satoshi Tanaka, Sergey Mironov, Sergii Khomenko, Shahid, Shawn Presser, ShengYang1, Siddhartha Bagaria, Simon Plovyt, skeydan, srinivasan.narayanamoorthy, Stephen Mugisha, sunway513, Takeshi Watanabe, Taylor Jakobson, TengLu, TheMindVirus, ThisIsIsaac, Tim Gates, Timothy Liu, Tomer Gafner, Trent Lo, Trevor Hickey, Trevor Morris, vcarpani, Wei Wang, Wen-Heng (Jack) Chung, wenshuai, Wenshuai-Xiaomi, wenxizhu, william, William D. Irons, Xinan Jiang, Yannic, Yasir Modak, Yasuhiro Matsumoto, Yong Tang, Yongfeng Gu, Youwei Song, Zaccharie Ramzi, Zhang, Zhenyu Guo, ηŽ‹ζŒ―εŽ (Zhenhua Wang), ιŸ©θ‘£, 이쀑건 Isaac Lee

  • v2.1.0-rc1 Changes

    December 11, 2019

    πŸš€ Release 2.1.0-rc1

    πŸš€ TensorFlow 2.1 will be the last TF release supporting Python 2. Python 2 support officially ends on January 1, 2020; as announced earlier, TensorFlow will stop supporting Python 2 on that date, and no more releases are expected in 2019.

    Major Features and Improvements

    • 🐧 The tensorflow pip package now includes GPU support by default (same as tensorflow-gpu) for both Linux and Windows. This runs on machines with and without NVIDIA GPUs. tensorflow-gpu is still available, and CPU-only packages can be downloaded at tensorflow-cpu for users who are concerned about package size.
    • πŸ“¦ The tensorflow pip package is built with CUDA 10.1 and cuDNN 7.6.
    • tf.keras
      • Model.fit_generator, Model.evaluate_generator, Model.predict_generator, Model.train_on_batch, Model.test_on_batch, and Model.predict_on_batch methods now respect the run_eagerly property, and will correctly run using tf.function by default.
      • Model.fit_generator, Model.evaluate_generator, and Model.predict_generator are deprecated endpoints. They are subsumed by Model.fit, Model.evaluate, and Model.predict which now support generators and Sequences.
      • Keras .compile, .fit, .evaluate, and .predict are allowed to be outside of the DistributionStrategy scope, as long as the model was constructed inside of a scope.
      • Keras model.load_weights now accepts skip_mismatch as an argument. This was available in external Keras, and has now been copied over to tf.keras.
      • Introduced the TextVectorization layer, which takes raw strings as input and handles text standardization, tokenization, n-gram generation, and vocabulary indexing. See this end-to-end text classification example.
      • Experimental support for Keras .compile, .fit, .evaluate, and .predict is available for Cloud TPU Pods.
      • Automatic outside compilation is now enabled for Cloud TPUs. This allows tf.summary to be used more conveniently with Cloud TPUs.
      • Dynamic batch sizes with DistributionStrategy and Keras are supported on Cloud TPUs.
      • Experimental support for mixed precision is available on GPUs and Cloud TPUs.
      • Keras reference implementations for many popular models are available in the TensorFlow Model Garden.
    • tf.data
      • Changes rebatching for tf.data datasets + distribution strategies for better performance. Note that the dataset also behaves slightly differently, in that the rebatched dataset cardinality will always be a multiple of the number of replicas.
    • TensorRT
      • TensorRT 6.0 is now supported and enabled by default. This adds support for more TensorFlow ops including Conv3D, Conv3DBackpropInputV2, AvgPool3D, MaxPool3D, ResizeBilinear, and ResizeNearestNeighbor. In addition, the TensorFlow-TensorRT python conversion API is exported as tf.experimental.tensorrt.Converter.
    • Environment variable TF_DETERMINISTIC_OPS has been added. When set to "true" or "1", this environment variable makes tf.nn.bias_add operate deterministically (i.e. reproducibly), but currently only when XLA JIT compilation is not enabled. Setting TF_DETERMINISTIC_OPS to "true" or "1" also makes cuDNN convolution and max-pooling operate deterministically. This makes Keras Conv*D and MaxPool*D layers operate deterministically in both the forward and backward directions when running on a CUDA-enabled GPU.

    πŸ’₯ Breaking Changes

    • Deletes Operation.traceback_with_start_lines for which we know of no usages.
    • Removed id from tf.Tensor.__repr__() as id is not useful other than for internal debugging.
    • Some tf.assert_* methods now raise assertions at operation creation time if the input tensors' values are known at that time, not during the session.run(). This only changes behavior when the graph execution would have resulted in an error. When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in feed_dict argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
    • The following APIs are no longer experimental: tf.config.list_logical_devices, tf.config.list_physical_devices, tf.config.get_visible_devices, tf.config.set_visible_devices, tf.config.get_logical_device_configuration, tf.config.set_logical_device_configuration.
    • tf.config.experimental.VirtualDeviceConfiguration has been renamed to tf.config.LogicalDeviceConfiguration.
    • tf.config.experimental_list_devices has been removed; please use tf.config.list_logical_devices instead.

    πŸ› Bug Fixes and Other Changes

    • tf.data
      • Fixes concurrency issue with tf.data.experimental.parallel_interleave with sloppy=True.
      • Add tf.data.experimental.dense_to_ragged_batch().
      • Extend tf.data parsing ops to support RaggedTensors.
    • tf.distribute
      • Fix issue where GRU would crash or give incorrect output when a tf.distribute.Strategy was used.
    • tf.estimator
      • Added option in tf.estimator.CheckpointSaverHook to not save the GraphDef.
      • Moved the checkpoint reader from SWIG to pybind11.
    • tf.keras
      • Export depthwise_conv2d in tf.keras.backend.
      • In Keras Layers and Models, Variables in trainable_weights, non_trainable_weights, and weights are explicitly deduplicated.
      • Fix the incorrect stateful behavior of Keras convolutional layers.
    • tf.lite
      • Legalization for NMS ops in TFLite.
      • Add narrow_range and axis to quantize_v2 and dequantize ops.
      • Added support for FusedBatchNormV3 in converter.
      • Add an errno-like field to NNAPI delegate for detecting NNAPI errors for fallback behaviour.
      • Refactors NNAPI Delegate to report the detailed reason why an operation is not accelerated.
      • Converts hardswish subgraphs into atomic ops.
    • Other
      • Add RaggedTensor.merge_dims().
      • Added new uniform_row_length row-partitioning tensor to RaggedTensor.
      • Add shape arg to RaggedTensor.to_tensor; Improve speed of RaggedTensor.to_tensor.
      • tf.io.parse_sequence_example and tf.io.parse_single_sequence_example now support ragged features.
      • Fix while_v2 with variables in custom gradient.
      • Support taking gradients of V2 tf.cond and tf.while_loop using LookupTable.
      • Fix bug where vectorized_map failed on inputs with unknown static shape.
      • Add preliminary support for sparse CSR matrices.
      • Tensor equality with None now behaves as expected.
      • Make calls to tf.function(f)(), tf.function(f).get_concrete_function and tf.function(f).get_initialization_function thread-safe.
      • Add tf.debugging.enable_check_numerics() and tf.debugging.disable_check_numerics() to facilitate debugging of numeric instability (infinities and NaNs) under eager mode and tf.functions.
      • Extend tf.identity to work with CompositeTensors (such as SparseTensor).
      • Added more dtypes and zero-sized inputs to Einsum Op and improved its performance.
      • Enable multi-worker NCCL all-reduce inside functions executing eagerly.
      • Added complex128 support to RFFT, RFFT2D, RFFT3D, IRFFT, IRFFT2D, and IRFFT3D.
      • Add pfor converter for SelfAdjointEigV2.
      • Add tf.math.ndtri and tf.math.erfinv.
      • Add tf.config.experimental.enable_mlir_bridge to allow using the MLIR compiler bridge in eager mode.
      • Added support for MatrixSolve on Cloud TPU / XLA.
      • Added tf.autodiff.ForwardAccumulator for forward-mode autodiff.
      • Add LinearOperatorPermutation.
      • A few performance optimizations on tf.reduce_logsumexp.
      • Added multilabel handling to the AUC metric.
      • Optimization on zeros_like.
      • Dimension constructor now requires None or types with an __index__ method.
      • Add tf.random.uniform microbenchmark.
      • Use _protogen suffix for proto library targets instead of _cc_protogen suffix.
      • tf.device and MirroredStrategy now support passing in a tf.config.LogicalDevice.
      • If you're building TensorFlow from source, consider using bazelisk to automatically download and use the correct Bazel version. Bazelisk reads the .bazelversion file at the root of the project directory.

    Thanks to our Contributors

    πŸš€ This release contains contributions from many people at Google, as well as:

    8bitmp3, Aaron Ma, AbdΓΌLhamit Yilmaz, Abhai Kollara, aflc, Ag Ramesh, Albert Z. Guo, Alex Torres, amoitra, Andrii Prymostka, angeliand, Anshuman Tripathy, Anthony Barbier, Anton Kachatkou, Anubh-V, Anuja Jakhade, Artem Ryabov, autoih, Bairen Yi, Bas Aarts, Basit Ayantunde, Ben Barsdell, Bhavani Subramanian, Brett Koonce, candy.dc, Captain-Pool, caster, cathy, Chong Yan, Choong Yin Thong, Clayne Robison, Colle, Dan Ganea, David Norman, David Refaeli, dengziming, Diego Caballero, Divyanshu, djshen, Douman, Duncan Riach, EFanZh, Elena Zhelezina, Eric Schweitz, Evgenii Zheltonozhskii, Fei Hu, fo40225, Fred Reiss, Frederic Bastien, Fredrik Knutsson, fsx950223, fwcore, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, giuros01, Gomathi Ramamurthy, Guozhong Zhuang, Haifeng Jin, Haoyu Wu, HarikrishnanBalagopal, HJYOO, Huang Chen-Yi, Ilham Firdausi Putra, Imran Salam, Jared Nielsen, Jason Zaman, Jasper Vicenti, Jeff Daily, Jeff Poznanovic, Jens Elofsson, Jerry Shih, jerryyin, Jesper Dramsch, jim.meyer, Jongwon Lee, Jun Wan, Junyuan Xie, Kaixi Hou, kamalkraj, Kan Chen, Karthik Muthuraman, Keiji Ariyama, Kevin Rose, Kevin Wang, Koan-Sin Tan, kstuedem, Kwabena W. Agyeman, Lakshay Tokas, latyas, Leslie-Fang-Intel, Li, Guizi, Luciano Resende, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manuel Freiberger, Mark Ryan, Martin Mlostek, Masaki Kozuki, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Muhwan Kim, Nagy Mostafa, nammbash, Nathan Luehr, Nathan Wells, Niranjan Hasabnis, Oleksii Volkovskyi, Olivier Moindrot, olramde, Ouyang Jin, OverLordGoldDragon, Pallavi G, Paul Andrey, Paul Wais, pkanwar23, Pooya Davoodi, Prabindh Sundareson, Rajeshwar Reddy T, Ralovich, Kristof, Refraction-Ray, Richard Barnes, richardbrks, Robert Herbig, Romeo Kienzler, Ryan Mccormick, saishruthi, Saket Khandelwal, Sami Kama, Sana Damani, Satoshi Tanaka, Sergey Mironov, Sergii Khomenko, Shahid, Shawn Presser, ShengYang1, Siddhartha Bagaria, Simon Plovyt, skeydan, srinivasan.narayanamoorthy, Stephen Mugisha, sunway513, Takeshi Watanabe, Taylor Jakobson, TengLu, TheMindVirus, ThisIsIsaac, Tim Gates, Timothy Liu, Tomer Gafner, Trent Lo, Trevor Hickey, Trevor Morris, vcarpani, Wei Wang, Wen-Heng (Jack) Chung, wenshuai, Wenshuai-Xiaomi, wenxizhu, william, William D. Irons, Xinan Jiang, Yannic, Yasir Modak, Yasuhiro Matsumoto, Yong Tang, Yongfeng Gu, Youwei Song, Zaccharie Ramzi, Zhang, Zhenyu Guo, ηŽ‹ζŒ―εŽ (Zhenhua Wang), ιŸ©θ‘£, 이쀑건 Isaac Lee

  • v2.1.0-rc0 Changes

    November 27, 2019

    πŸš€ Release 2.1.0-rc0

    πŸš€ TensorFlow 2.1 will be the last TF release supporting Python 2. Python 2 support officially ends on January 1, 2020; as announced earlier, TensorFlow will stop supporting Python 2 on that date, and no more releases are expected in 2019.

    Major Features and Improvements

    • 🐧 The tensorflow pip package now includes GPU support by default (same as tensorflow-gpu) for both Linux and Windows. This runs on machines with and without NVIDIA GPUs. tensorflow-gpu is still available, and CPU-only packages can be downloaded at tensorflow-cpu for users who are concerned about package size.
    • tf.keras
      • Model.fit_generator, Model.evaluate_generator, Model.predict_generator, Model.train_on_batch, Model.test_on_batch, and Model.predict_on_batch methods now respect the run_eagerly property, and will correctly run using tf.function by default.
      • Model.fit_generator, Model.evaluate_generator, and Model.predict_generator are deprecated endpoints. They are subsumed by Model.fit, Model.evaluate, and Model.predict which now support generators and Sequences.
      • Keras .compile, .fit, .evaluate, and .predict are allowed to be outside of the DistributionStrategy scope, as long as the model was constructed inside of a scope.
      • Keras model.load_weights now accepts skip_mismatch as an argument. This was available in external Keras, and has now been copied over to tf.keras.
      • Introduced the TextVectorization layer, which takes raw strings as input and handles text standardization, tokenization, n-gram generation, and vocabulary indexing. See this end-to-end text classification example.
      • Experimental support for Keras .compile, .fit, .evaluate, and .predict is available for Cloud TPU Pods.
      • Automatic outside compilation is now enabled for Cloud TPUs. This allows tf.summary to be used more conveniently with Cloud TPUs.
      • Dynamic batch sizes with DistributionStrategy and Keras are supported on Cloud TPUs.
      • Experimental support for mixed precision is available on GPUs and Cloud TPUs.
      • Keras reference implementations for many popular models are available in the TensorFlow Model Garden.
    • tf.data
      • Changes rebatching for tf.data datasets + distribution strategies for better performance. Note that the dataset also behaves slightly differently, in that the rebatched dataset cardinality will always be a multiple of the number of replicas.
    • TensorRT
      • TensorRT 6.0 is now supported and enabled by default. This adds support for more TensorFlow ops including Conv3D, Conv3DBackpropInputV2, AvgPool3D, MaxPool3D, ResizeBilinear, and ResizeNearestNeighbor. In addition, the TensorFlow-TensorRT python conversion API is exported as tf.experimental.tensorrt.Converter.

    Known issues

    🏁 Because of issues with building on Windows, we turned off Eigen strong inlining for the Windows builds. Windows binaries are expected to be slightly slower until the build issues are resolved.

    πŸ’₯ Breaking Changes

    • Deletes Operation.traceback_with_start_lines for which we know of no usages.
    • Removed id from tf.Tensor.__repr__() as id is not useful other than for internal debugging.
    • Some tf.assert_* methods now raise assertions at operation creation time if the input tensors' values are known at that time, not during the session.run(). This only changes behavior when the graph execution would have resulted in an error. When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in feed_dict argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
    • The following APIs are no longer experimental: tf.config.list_logical_devices, tf.config.list_physical_devices, tf.config.get_visible_devices, tf.config.set_visible_devices, tf.config.get_logical_device_configuration, tf.config.set_logical_device_configuration.
    • tf.config.experimental.VirtualDeviceConfiguration has been renamed to tf.config.LogicalDeviceConfiguration.
    • tf.config.experimental_list_devices has been removed; please use tf.config.list_logical_devices instead.

    πŸ› Bug Fixes and Other Changes

    • tf.data
      • Fixes concurrency issue with tf.data.experimental.parallel_interleave with sloppy=True.
      • Add tf.data.experimental.dense_to_ragged_batch().
      • Extend tf.data parsing ops to support RaggedTensors.
    • tf.distribute
      • Fix issue where GRU would crash or give incorrect output when a tf.distribute.Strategy was used.
    • tf.estimator
      • Added option in tf.estimator.CheckpointSaverHook to not save the GraphDef.
    • tf.keras
      • Export depthwise_conv2d in tf.keras.backend.
      • In Keras Layers and Models, Variables in trainable_weights, non_trainable_weights, and weights are explicitly deduplicated.
      • Fix the incorrect stateful behavior of Keras convolutional layers.
    • tf.lite
      • Legalization for NMS ops in TFLite.
      • Add narrow_range and axis to quantize_v2 and dequantize ops.
      • Added support for FusedBatchNormV3 in converter.
      • Add an errno-like field to NNAPI delegate for detecting NNAPI errors for fallback behaviour.
      • Refactors NNAPI Delegate to report the detailed reason why an operation is not accelerated.
      • Converts hardswish subgraphs into atomic ops.
    • Other
      • Add RaggedTensor.merge_dims().
      • Added new uniform_row_length row-partitioning tensor to RaggedTensor.
      • Add shape arg to RaggedTensor.to_tensor; Improve speed of RaggedTensor.to_tensor.
      • tf.io.parse_sequence_example and tf.io.parse_single_sequence_example now support ragged features.
      • Fix while_v2 with variables in custom gradient.
      • Support taking gradients of V2 tf.cond and tf.while_loop using LookupTable.
      • Fix bug where vectorized_map failed on inputs with unknown static shape.
      • Add preliminary support for sparse CSR matrices.
      • Tensor equality with None now behaves as expected.
      • Make calls to tf.function(f)(), tf.function(f).get_concrete_function and tf.function(f).get_initialization_function thread-safe.
      • Add tf.debugging.enable_check_numerics() and tf.debugging.disable_check_numerics() to facilitate debugging of numeric instability (infinities and NaNs) under eager mode and tf.functions.
      • Extend tf.identity to work with CompositeTensors (such as SparseTensor).
      • Added more dtypes and zero-sized inputs to Einsum Op and improved its performance.
      • Enable multi-worker NCCL all-reduce inside functions executing eagerly.
      • Added complex128 support to RFFT, RFFT2D, RFFT3D, IRFFT, IRFFT2D, and IRFFT3D.
      • Add pfor converter for SelfAdjointEigV2.
      • Add tf.math.ndtri and tf.math.erfinv.
      • Add tf.config.experimental.enable_mlir_bridge to allow using the MLIR compiler bridge in eager mode.
      • Added support for MatrixSolve on Cloud TPU / XLA.
      • Added tf.autodiff.ForwardAccumulator for forward-mode autodiff.
      • Add LinearOperatorPermutation.
      • A few performance optimizations on tf.reduce_logsumexp.
      • Added multilabel handling to the AUC metric.
      • Optimization on zeros_like.
      • Dimension constructor now requires None or types with an __index__ method.
      • Add tf.random.uniform microbenchmark.
      • Use _protogen suffix for proto library targets instead of _cc_protogen suffix.
      • Moved the checkpoint reader from SWIG to pybind11.
      • tf.device and MirroredStrategy now support passing in a tf.config.LogicalDevice.
      • If you're building TensorFlow from source, consider using bazelisk to automatically download and use the correct Bazel version. Bazelisk reads the .bazelversion file at the root of the project directory.

    Thanks to our Contributors

    πŸš€ This release contains contributions from many people at Google, as well as:

    8bitmp3, Aaron Ma, AbdΓΌLhamit Yilmaz, Abhai Kollara, aflc, Ag Ramesh, Albert Z. Guo, Alex Torres, amoitra, Andrii Prymostka, angeliand, Anshuman Tripathy, Anthony Barbier, Anton Kachatkou, Anubh-V, Anuja Jakhade, Artem Ryabov, autoih, Bairen Yi, Bas Aarts, Basit Ayantunde, Ben Barsdell, Bhavani Subramanian, Brett Koonce, candy.dc, Captain-Pool, caster, cathy, Chong Yan, Choong Yin Thong, Clayne Robison, Colle, Dan Ganea, David Norman, David Refaeli, dengziming, Diego Caballero, Divyanshu, djshen, Douman, Duncan Riach, EFanZh, Elena Zhelezina, Eric Schweitz, Evgenii Zheltonozhskii, Fei Hu, fo40225, Fred Reiss, Frederic Bastien, Fredrik Knutsson, fsx950223, fwcore, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, giuros01, Gomathi Ramamurthy, Guozhong Zhuang, Haifeng Jin, Haoyu Wu, HarikrishnanBalagopal, HJYOO, Huang Chen-Yi, Ilham Firdausi Putra, Imran Salam, Jared Nielsen, Jason Zaman, Jasper Vicenti, Jeff Daily, Jeff Poznanovic, Jens Elofsson, Jerry Shih, jerryyin, Jesper Dramsch, jim.meyer, Jongwon Lee, Jun Wan, Junyuan Xie, Kaixi Hou, kamalkraj, Kan Chen, Karthik Muthuraman, Keiji Ariyama, Kevin Rose, Kevin Wang, Koan-Sin Tan, kstuedem, Kwabena W. Agyeman, Lakshay Tokas, latyas, Leslie-Fang-Intel, Li, Guizi, Luciano Resende, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manuel Freiberger, Mark Ryan, Martin Mlostek, Masaki Kozuki, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Muhwan Kim, Nagy Mostafa, nammbash, Nathan Luehr, Nathan Wells, Niranjan Hasabnis, Oleksii Volkovskyi, Olivier Moindrot, olramde, Ouyang Jin, OverLordGoldDragon, Pallavi G, Paul Andrey, Paul Wais, pkanwar23, Pooya Davoodi, Prabindh Sundareson, Rajeshwar Reddy T, Ralovich, Kristof, Refraction-Ray, Richard Barnes, richardbrks, Robert Herbig, Romeo Kienzler, Ryan Mccormick, saishruthi, Saket Khandelwal, Sami Kama, Sana Damani, Satoshi Tanaka, Sergey Mironov, Sergii Khomenko, Shahid, Shawn Presser, ShengYang1, Siddhartha Bagaria, Simon Plovyt, skeydan, srinivasan.narayanamoorthy, Stephen Mugisha, sunway513, Takeshi Watanabe, Taylor Jakobson, TengLu, TheMindVirus, ThisIsIsaac, Tim Gates, Timothy Liu, Tomer Gafner, Trent Lo, Trevor Hickey, Trevor Morris, vcarpani, Wei Wang, Wen-Heng (Jack) Chung, wenshuai, Wenshuai-Xiaomi, wenxizhu, william, William D. Irons, Xinan Jiang, Yannic, Yasir Modak, Yasuhiro Matsumoto, Yong Tang, Yongfeng Gu, Youwei Song, Zaccharie Ramzi, Zhang, Zhenyu Guo, ηŽ‹ζŒ―εŽ (Zhenhua Wang), ιŸ©θ‘£, 이쀑건 Isaac Lee

  • v2.0.3 Changes

    September 24, 2020

    πŸš€ Release 2.0.3

    πŸ› Bug Fixes and Other Changes

  • v2.0.2 Changes

    May 18, 2020

    πŸ› Bug Fixes and Other Changes

  • v2.0.1 Changes

    January 25, 2020

    πŸš€ Release 2.0.1

    πŸ› Bug Fixes and Other Changes

  • v1.15.4 Changes

    September 24, 2020

    πŸš€ Release 1.15.4

    πŸ› Bug Fixes and Other Changes

  • v1.15.3 Changes

    May 18, 2020

    πŸ› Bug Fixes and Other Changes

  • v1.15.2 Changes

    January 26, 2020

    πŸš€ Release 1.15.2

    πŸš€ Note that this release no longer has a single pip package for GPU and CPU. Please see #36347 for history and details.

    πŸ› Bug Fixes and Other Changes

  • v1.15.0 Changes

    October 16, 2019

    πŸš€ Release 1.15.0

    πŸš€ This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year.

    Major Features and Improvements

    • 🐧 As announced, the tensorflow pip package will by default include GPU support (same as tensorflow-gpu now) for the platforms on which we currently have GPU support (Linux and Windows). It will work on machines with and without NVIDIA GPUs. tensorflow-gpu will still be available, and CPU-only packages can be downloaded at tensorflow-cpu for users who are concerned about package size.
    • TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function.
      This enables writing forward-compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0 (see the import sketch after this list).
    • πŸ‘ EagerTensor now supports numpy buffer interface for tensors.
    • Add toggles tf.enable_control_flow_v2() and tf.disable_control_flow_v2() for enabling/disabling v2 control flow.
    • Enable v2 control flow as part of tf.enable_v2_behavior() and TF2_BEHAVIOR=1.
    • AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside tf.function-decorated functions. AutoGraph is also applied in functions used with tf.data, tf.distribute and tf.keras APIs.
    • Adds enable_tensor_equality(), which switches the behavior such that:
      • Tensors are no longer hashable.
      • Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0.
    • Auto Mixed-Precision graph optimizer simplifies converting models to float16 for acceleration on Volta and Turing Tensor Cores. This feature can be enabled by wrapping an optimizer class with tf.train.experimental.enable_mixed_precision_graph_rewrite() (a sketch follows this list).
    • Add environment variable TF_CUDNN_DETERMINISTIC. Setting to "true" or "1" forces the selection of deterministic cuDNN convolution and max-pooling algorithms. When this is enabled, the algorithm selection procedure itself is also deterministic.
    • TensorRT
      • Migrate TensorRT conversion sources from contrib to compiler directory in preparation for TF 2.0.
      • Add additional, user friendly TrtGraphConverter API for TensorRT conversion.
      • Expand support for TensorFlow operators in TensorRT conversion (e.g. Gather, Slice, Pack, Unpack, ArgMin, ArgMax, DepthSpaceShuffle).
      • Support TensorFlow operator CombinedNonMaxSuppression in TensorRT conversion, which significantly accelerates object detection models.
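
    A sketch of the forward-compatible import pattern described above; assuming a 1.15 or 2.0 installation, the same file runs unchanged:

    ```python
    # 2.0-style code against either installation.
    import tensorflow.compat.v2 as tf   # the full 2.0 API, even on 1.15
    import tensorflow.compat.v1 as tf1

    tf1.enable_v2_behavior()  # emulate 2.0 behavior on 1.15; harmless on 2.0

    print(tf.executing_eagerly())  # True once v2 behavior is active
    ```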
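
    And a sketch of the automatic mixed-precision graph rewrite; the optimizer choice and learning rate are illustrative:

    ```python
    import tensorflow as tf  # TF 1.15

    opt = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
    # The graph rewriter converts eligible subgraphs to float16 and adds
    # dynamic loss scaling, targeting Volta/Turing Tensor Cores.
    opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
    ```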

    πŸ’₯ Breaking Changes

    • πŸ“¦ Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation.
    • TensorFlow 1.15 is built using devtoolset7 (GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.
    • πŸ—„ Deprecated the use of constraint= and .constraint with ResourceVariable.
    • tf.keras:
      • OMP_NUM_THREADS is no longer used by the default Keras config. To configure the number of threads, use tf.config.threading APIs.
      • tf.keras.models.save_model and model.save now default to saving a TensorFlow SavedModel.
      • keras.backend.resize_images (and consequently, keras.layers.Upsampling2D) behavior has changed; a bug in the resizing implementation was fixed.
      • Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow 2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with tf.keras.backend.set_floatx('float64'), or pass dtype='float64' to each of the Layer constructors. See tf.keras.layers.Layer for more information.
      • Some tf.assert_* methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in feed_dict argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).

    πŸ› Bug Fixes and Other Changes

    • tf.estimator:
      • tf.keras.estimator.model_to_estimator now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with model.load_weights.
      • Fix tests in canned estimators.
      • Expose Head as public API.
      • Fixes critical bugs that help with DenseFeatures usability in TF2.
    • tf.data:
      • Promoting unbatch from experimental to core API (a usage sketch follows this list).
      • Adding support for datasets as inputs to from_tensors and from_tensor_slices, and for batching and unbatching of nested datasets.
    • tf.keras:
      • tf.keras.estimator.model_to_estimator now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with model.load_weights.
      • Saving a Keras Model using tf.saved_model.save now saves the list of variables, trainable variables, regularization losses, and the call function.
      • Deprecated tf.keras.experimental.export_saved_model and tf.keras.experimental.function. Please use tf.keras.models.save_model(..., save_format='tf') and tf.keras.models.load_model instead.
      • Add an implementation=3 mode for tf.keras.layers.LocallyConnected2D and tf.keras.layers.LocallyConnected1D layers using tf.SparseTensor to store weights, allowing a dramatic speedup for large sparse models.
      • Enable the Keras compile API experimental_run_tf_function flag by default. This flag enables the single training/eval/predict execution path. With this: (1) all input types are converted to Dataset; (2) when a distribution strategy is not specified, this goes through the no-op distribution strategy path; (3) execution is wrapped in tf.function unless run_eagerly=True is set in compile.
      • Raise error if batch_size argument is used when input is dataset/generator/keras sequence.
    • tf.lite
      • Add GATHER support to NN API delegate.
      • tflite object detection script has a debug mode.
      • Add delegate support for QUANTIZE.
      • Added evaluation script for COCO minival.
      • Add delegate support for QUANTIZED_16BIT_LSTM.
      • Converts hardswish subgraphs into atomic ops.
    • βž• Add support for defaulting the value of cycle_length argument of tf.data.Dataset.interleave to the number of schedulable CPU cores.
    • parallel_for: Add converter for MatrixDiag.
    • βž• Add narrow_range attribute to QuantizeAndDequantizeV2 and V3.
    • Added new op: tf.strings.unsorted_segment_join.
    • βž• Add HW acceleration support for topK_v2.
    • βž• Add new TypeSpec classes.
    • ⚑️ CloudBigtable version updated to v0.10.0.
    • ⚑️ Update docstring for gather to properly describe the non-empty batch_dims case.
    • βž• Added tf.sparse.from_dense utility function (a sketch follows this list).
    • πŸ‘Œ Improved ragged tensor support in TensorFlowTestCase.
    • πŸ”§ Makes the a-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not.
    • ResizeInputTensor now works for all delegates.
    • βœ… Add EXPAND_DIMS support to NN API delegate (covered by expand_dims_test).
    • tf.cond emits a StatelessIf op if the branch functions are stateless and do not touch any resources.
    • tf.cond, tf.while and if and while in AutoGraph now accept a non-scalar predicate if it has a single element. This does not affect non-V2 control flow.
    • tf.while_loop emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources.
    • πŸ”¨ Refactors code in Quant8 LSTM support to reduce TFLite binary size.
    • βž• Add support of local soft device placement for eager op.
    • βž• Add HW acceleration support for LogSoftMax.
    • Added a function nested_value_rowids for ragged tensors.
    • βž• Add guard to avoid acceleration of L2 Normalization with input rank != 4.
    • βž• Add tf.math.cumulative_logsumexp operation.
    • βž• Add tf.ragged.stack.
    • πŸ›  Fix memory allocation problem when calling AddNewInputConstantTensor.
    • Delegate application failure leaves interpreter in valid state.
    • βž• Add check for correct memory alignment to MemoryAllocation::MemoryAllocation().
    • Extracts NNAPIDelegateKernel from nnapi_delegate.cc.
    • βž• Added support for FusedBatchNormV3 in converter.
    • Added a ragged-to-dense op for directly calculating tensors.
    • πŸ›  Fix accidental quadratic graph construction cost in graph-mode tf.gradients().
    • The precision_mode argument to TrtGraphConverter is now case insensitive.
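
    A usage sketch for the newly promoted core unbatch API referenced above:

    ```python
    import tensorflow.compat.v2 as tf
    # On a 1.15 install, enable v2 behavior first to iterate eagerly.

    # unbatch() is now core API (previously tf.data.experimental.unbatch()).
    ds = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]).unbatch()
    for x in ds:
        print(int(x))  # 1, 2, 3, 4
    ```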
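
    And the new tf.sparse.from_dense utility, round-tripping a mostly-zero tensor (values are illustrative):

    ```python
    import tensorflow.compat.v2 as tf

    dense = tf.constant([[0, 0, 3], [4, 0, 0]])
    sp = tf.sparse.from_dense(dense)  # keeps only the nonzero entries
    print(sp.indices)                 # [[0 2], [1 0]]
    back = tf.sparse.to_dense(sp)     # round-trips to the original tensor
    ```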

    Thanks to our Contributors

    πŸš€ This release contains contributions from many people at Google, as well as:

    a6802739, Aaron Ma, Abdullah Selek, Abolfazl Shahbazi, Ag Ramesh, Albert Z. Guo, Albin Joy, Alex Itkes, Alex Sergeev, Alexander Pivovarov, Alexey Romanov, alhkad, Amit Srivastava, amoitra, Andrew Lihonosov, Andrii Prymostka, Anuj Rawat, Astropeak, Ayush Agrawal, Bairen Yi, Bas Aarts, Bastian Eichenberger, Ben Barsdell, Benjamin Peterson, bhack, Bharat Raghunathan, Bhavani Subramanian, Bryan Cutler, candy.dc, Cao Zongyan, Captain-Pool, Casper Da Costa-Luis, Chen Guoyin, Cheng Chang, chengchingwen, Chong Yan, Choong Yin Thong, Christopher Yeh, Clayne Robison, Coady, Patrick, Dan Ganea, David Norman, Denis Khalikov, Deven Desai, Diego Caballero, Duncan Dean, Duncan Riach, Dwight J Lyle, Eamon Ito-Fisher, eashtian3, EFanZh, ejot, Elroy Ashtian Jr, Eric Schweitz, Fangjun Kuang, Fei Hu, fo40225, formath, Fred Reiss, Frederic Bastien, Fredrik Knutsson, G. Hussain Chinoy, Gabriel, gehring, George Grzegorz Pawelczak, Gianluca Varisco, Gleb Popov, Greg Peatfield, Guillaume Klein, Gurpreet Singh, Gustavo Lima Chaves, haison, Haraldur TΓ³Mas HallgrΓ­Msson, HarikrishnanBalagopal, HΓ₯Kon Sandsmark, I-Hong, Ilham Firdausi Putra, Imran Salam, Jason Zaman, Jason Zavaglia, jayhpark530, jefby, Jeff Daily, Jeffrey Poznanovic, Jekyll Lai, Jeroen BΓ©Dorf, Jerry Shih, jerryyin, jiakai, JiangXIAO, Joe Bowser, Joel Shapiro, Johan Gunnarsson, Jojimon Varghese, Joon, Josh Beal, Julian Niedermeier, Jun Wan, Junqin Zhang, Junyuan Xie, Justin Tunis, Kaixi Hou, Karl Lessard, Karthik Muthuraman, Kbhute-Ibm, khanhlvg, Koock Yoon, kstuedem, Kyuwon Kim, Lakshay Tokas, leike666666, leonard951, Leslie-Fang, Leslie-Fang-Intel, Li, Guizi, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manraj Singh Grover, Margaret Maynard-Reid, Mark Ryan, Matt Conley, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Mei Jie, merturl, MichaelKonobeev, Michal W. Tarnowski, minds, mpppk, musikisomorphie, Nagy Mostafa, Nayana Thorat, Neil, Niels Ole Salscheider, Niklas SilfverstrΓΆM, Niranjan Hasabnis, ocjosen, olramde, Pariksheet Pinjari, Patrick J. Lopresti, Patrik Gustavsson, per1234, PeterLee, Phan Van Nguyen Duc, Phillip Kravtsov, Pooya Davoodi, Pranav Marathe, Putra Manggala, Qingqing Cao, Rajeshwar Reddy T, Ramon ViΓ±As, Rasmus Diederichsen, Reuben Morais, richardbrks, robert, RonLek, Ryan Jiang, saishruthi, Saket Khandelwal, Saleem Abdulrasool, Sami Kama, Sana-Damani, Sergii Khomenko, Severen Redwood, Shubham Goyal, Sigrid Keydana, Siju Samuel, sleighsoft, smilu97, Son Tran, Srini511, srinivasan.narayanamoorthy, Sumesh Udayakumaran, Sungmann Cho, Tae-Hwan Jung, Taehoon Lee, Takeshi Watanabe, TengLu, terryky, TheMindVirus, ThisIsIsaac, Till Hoffmann, Timothy Liu, Tomer Gafner, Tongxuan Liu, Trent Lo, Trevor Morris, Uday Bondhugula, Vasileios Lioutas, vbvg2008, Vishnuvardhan Janapati, Vivek Suryamurthy, Wei Wang, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, winstonq, wyzhao, Xiaoming (Jason) Cui, Xinan Jiang, Xinping Wang, Yann-Yy, Yasir Modak, Yong Tang, Yongfeng Gu, Yuchen Ying, Yuxin Wu, zyeric, ηŽ‹ζŒ―εŽ (Zhenhua Wang)