
Changelog History

  • v2.2.1 Changes

    September 24, 2020

    🚀 Release 2.2.1

    🐛 Bug Fixes and Other Changes

  • v2.2.0 Changes

    May 06, 2020

    🚀 Release 2.2.0

    ⚡️ TensorFlow 2.2 discontinues support for Python 2, previously announced as following Python 2's EOL on January 1, 2020.

    🚀 Coinciding with this change, new releases of TensorFlow's Docker images provide Python 3 exclusively. Because all images now use Python 3, Docker tags containing -py3 will no longer be provided and existing -py3 tags like latest-py3 will not be updated.

    Major Features and Improvements

    Replaced the scalar type for string tensors from std::string to tensorflow::tstring which is now ABI stable.

    A new Profiler for TF 2 for CPU/GPU/TPU. It offers both device and host performance analysis, including input pipeline and TF Ops. Optimization advisory is provided whenever possible. Please see this tutorial and guide for usage guidelines.
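    A minimal sketch of capturing a trace with the new profiler is shown below; it assumes the tf.profiler.experimental.start/stop API from this release and a hypothetical log directory, and the resulting trace can be viewed in TensorBoard's Profile tab.

    ```python
    import tensorflow as tf

    # Hypothetical log directory; TensorBoard's Profile tab reads traces from here.
    logdir = "/tmp/tf2_profile"

    tf.profiler.experimental.start(logdir)  # begin capturing host/device activity
    x = tf.random.normal([1024, 1024])
    for _ in range(10):
        x = tf.matmul(x, x)                 # workload to profile
    tf.profiler.experimental.stop()         # stop and write the trace to logdir
    ```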

    🗄 Export C++ functions to Python using pybind11 instead of SWIG, as part of our effort to deprecate SWIG.

    tf.distribute:

    • Support added for global sync BatchNormalization via the newly added tf.keras.layers.experimental.SyncBatchNormalization layer. This layer syncs BatchNormalization statistics every step across all replicas taking part in sync training (see the sketch after this list).
    • Performance improvements for GPU multi-worker distributed training using tf.distribute.experimental.MultiWorkerMirroredStrategy
      • Update NVIDIA NCCL to 2.5.7-1 for better performance and performance tuning. Please see the NCCL developer guide for more information.
      • Support gradient allreduce in float16. See this example usage.
      • Experimental support for all-reduce gradient packing, allowing gradient aggregation to overlap with backward-pass computation.
      • Deprecated the experimental_run_v2 method for distribution strategies and renamed it to run, as it is no longer experimental.

      - Add CompositeTensor support for DistributedIterators. This should help prevent unnecessary function retracing and memory leaks.
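    Below is a minimal sketch of the SyncBatchNormalization layer referenced in the list above, assuming a MirroredStrategy and an illustrative toy model (layer sizes are placeholders):

    ```python
    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
            # Batch statistics are aggregated across all replicas every step.
            tf.keras.layers.experimental.SyncBatchNormalization(),
            tf.keras.layers.Dense(10),
        ])
        model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
    ```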

    tf.keras:

    • Model.fit major improvements:
      • You can now use custom training logic with Model.fit by overriding Model.train_step (see the sketch after this list).
      • Easily write state-of-the-art training loops without worrying about all of the features Model.fit handles for you (distribution strategies, callbacks, data formats, looping logic, etc.).
      • See the default Model.train_step for an example of what this function should look like. The same applies to validation and inference via Model.test_step and Model.predict_step.
      • SavedModel now uses its own Model._saved_model_inputs_spec attribute instead of
        relying on Model.inputs and Model.input_names, which are no longer set for subclassed Models.
        This attribute is set in eager, tf.function, and graph modes, removing the need for users to
        manually call Model._set_inputs when using Custom Training Loops (CTLs).
      • Dynamic shapes are supported for generators by calling the Model on the first batch we "peek" from the generator.
        This used to happen implicitly in Model._standardize_user_data. Long-term, a solution where the
        DataAdapter doesn't need to call the Model is probably preferable.
    • The SavedModel format now supports all Keras built-in layers (including metrics, preprocessing layers, and stateful RNN layers)

    - Update Keras batch normalization layer to use the running mean and average computation in the fused_batch_norm. You should see significant performance improvements when using fused_batch_norm in Eager mode.
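    As referenced in the Model.fit list above, here is a minimal sketch of overriding Model.train_step for a simple supervised model; the model architecture and data shapes are illustrative assumptions, and the body mirrors what the default train_step does.

    ```python
    import tensorflow as tf

    class CustomModel(tf.keras.Model):
        def train_step(self, data):
            x, y = data                               # unpack what Model.fit passes in
            with tf.GradientTape() as tape:
                y_pred = self(x, training=True)       # forward pass
                loss = self.compiled_loss(y, y_pred)  # loss configured in compile()
            grads = tape.gradient(loss, self.trainable_variables)
            self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
            self.compiled_metrics.update_state(y, y_pred)
            return {m.name: m.result() for m in self.metrics}

    inputs = tf.keras.Input(shape=(8,))
    outputs = tf.keras.layers.Dense(1)(inputs)
    model = CustomModel(inputs, outputs)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    model.fit(tf.random.normal((64, 8)), tf.random.normal((64, 1)), epochs=1)
    ```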

    tf.lite:

    - Enable TFLite experimental new converter by default.

    XLA

    • XLA now builds and works on Windows. All prebuilt packages come with XLA available.
    • XLA can be enabled for a tf.function with “compile or throw exception” semantics on CPU and GPU.
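    A minimal sketch of opting a tf.function into XLA with compile-or-throw semantics, assuming the experimental_compile=True argument available in this release:

    ```python
    import tensorflow as tf

    # experimental_compile=True asks XLA to compile the function and raises an
    # error if it cannot be compiled for the current device.
    @tf.function(experimental_compile=True)
    def dense_relu(x, w, b):
        return tf.nn.relu(tf.matmul(x, w) + b)

    x = tf.random.normal([8, 16])
    w = tf.random.normal([16, 4])
    b = tf.zeros([4])
    print(dense_relu(x, w, b).shape)  # (8, 4)
    ```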

    💥 Breaking Changes

    • tf.keras:
      • In tf.keras.applications the name of the "top" layer has been standardized to "predictions". This is only a problem if your code relies on the exact name of the layer.
      • The Huber loss function has been updated to be consistent with other Keras losses. It now computes the mean over the last axis of per-sample losses before applying the reduction function.
    • AutoGraph no longer converts functions passed to tf.py_function, tf.py_func and tf.numpy_function.
    • Deprecating XLA_CPU and XLA_GPU devices with this release.
    • Increasing the minimum Bazel version required to build TF to 2.0.0, in order to use Bazel's cc_experimental_shared_library.
    • Keras compile/fit behavior for functional and subclassed models has been unified. Model properties such as metrics and metrics_names will now be available only after training/evaluating the model on actual data for functional models. metrics will now include model loss and output losses. The loss_functions property has been removed from the model; it was an undocumented property that was accidentally public.

    Known Caveats

    • The current TensorFlow release now requires gast version 0.3.3.

    🐛 Bug Fixes and Other Changes

    • tf.data:
      • Removed autotune_algorithm from experimental optimization options.
    • TF Core:
      • tf.constant always creates CPU tensors irrespective of the current device context.
      • Eager TensorHandles maintain a list of mirrors for any copies to local or remote devices. This avoids any redundant copies due to op execution.
      • For tf.Tensor & tf.Variable, .experimental_ref() is no longer experimental and is available as simply .ref().
      • pfor/vectorized_map: Added support for vectorizing 56 more ops. Vectorizing tf.cond is also supported now.
      • Set as much partial shape as we can infer statically within the gradient impl of the gather op.
      • Gradient of tf.while_loop emits a StatelessWhile op if the cond and body functions are stateless. This allows multiple gradient While ops to run in parallel under a distribution strategy.
      • Speed up GradientTape in eager mode by auto-generating list of op inputs/outputs which are unused and hence not cached for gradient functions.
      • Support back_prop=False in while_v2 but mark it as deprecated.
      • Improve error message when attempting to use None in data-dependent control flow.
      • Add RaggedTensor.numpy().
      • Update RaggedTensor.__getitem__ to preserve uniform dimensions & allow indexing into uniform dimensions.
      • Update tf.expand_dims to always insert the new dimension as a non-ragged dimension.
      • Update tf.embedding_lookup to use partition_strategy and max_norm when ids is ragged.
      • Allow batch_dims==rank(indices) in tf.gather.
      • Add support for bfloat16 in tf.print.
    • tf.distribute:
      • Support embedding_column with variable-length input features for MultiWorkerMirroredStrategy.
    • tf.keras:
      • Added the experimental_aggregate_gradients argument to tf.keras.optimizer.Optimizer.apply_gradients. This allows custom gradient aggregation and processing of aggregated gradients in a custom training loop (see the sketch after this bug-fix list).
      • Allow pathlib.Path paths for loading models via Keras API.
    • tf.function/AutoGraph:
      • AutoGraph is now available in ReplicaContext.merge_call, Strategy.extended.update and Strategy.extended.update_non_slot.
      • Experimental support for shape invariants has been enabled in tf.function. See the API docs for tf.autograph.experimental.set_loop_options for additional info.
      • AutoGraph error messages now exclude frames corresponding to APIs internal to AutoGraph.
      • Improve shape inference for tf.function input arguments to unlock more Grappler optimizations in TensorFlow 2.x.
      • Improve automatic control dependency management of resources by allowing resource reads to occur in parallel and synchronizing only on writes.
      • Fix execution order of multiple stateful calls to experimental_run_v2 in tf.function.
      • You can now iterate over RaggedTensors using a for loop inside tf.function.
    • tf.lite:
      • Migrated the tf.lite C inference API out of experimental into lite/c.
      • Add an option to disallow NNAPI CPU / partial acceleration on Android 10.
      • TFLite Android AARs now include the C headers and APIs required to use TFLite from native code.
      • Refactors the delegate and delegate kernel sources to allow usage in the linter.
      • Limit delegated ops to actually supported ones if a device name is specified or NNAPI CPU Fallback is disabled.
      • TFLite now supports the tf.math.reciprocal op by lowering it to the tf.div op.
      • TFLite's unpack op now supports boolean tensor inputs.
      • Microcontroller and embedded code moved from experimental to main TensorFlow Lite folder
      • Check for large TFLite tensors.
      • Fix GPU delegate crash with C++17.
      • Add 5D support to TFLite strided_slice.
      • Fix error in delegation of DEPTH_TO_SPACE to NNAPI causing op not to be accelerated.
      • Fix segmentation fault when running a model with LSTM nodes using NNAPI Delegate
      • Fix NNAPI delegate failure when an operand for Maximum/Minimum operation is a scalar.
      • Fix NNAPI delegate failure when Axis input for reduce operation is a scalar.
      • Expose option to limit the number of partitions that will be delegated to NNAPI.
      • If a target accelerator is specified, use its feature level to determine operations to delegate instead of SDK version.
    • tf.random:
      • Various random number generation improvements:
      • Add a fast path for default random_uniform
      • random_seed documentation improvement.
      • RandomBinomial broadcasts and appends the sample shape to the left rather than the right.
      • Added tf.random.stateless_binomial, tf.random.stateless_gamma, tf.random.stateless_poisson
      • tf.random.stateless_uniform now supports unbounded sampling of int types.
    • Math and Linear Algebra:
      • Add tf.linalg.LinearOperatorTridiag.
      • Add LinearOperatorBlockLowerTriangular
      • Add broadcasting support to tf.linalg.triangular_solve (#26204) and tf.math.invert_permutation.
      • Add tf.math.sobol_sample op.
      • Add tf.math.xlog1py.
      • Add tf.math.special.{dawsn,expi,fresnel_cos,fresnel_sin,spence}.
      • Add a Modified Discrete Cosine Transform (MDCT) and its inverse to tf.signal.
    • TPU Enhancements:
      • Refactor TpuClusterResolver to move shared logic to a separate pip package.
      • Support configuring the TPU software version from the Cloud TPU client.
      • Allowed TPU embedding weight decay factor to be multiplied by learning rate.
    • 👍 XLA Support:
      • Add standalone XLA AOT runtime target + relevant .cc sources to pip package.
      • Add check for memory alignment to MemoryAllocation::MemoryAllocation() on 32-bit ARM. This ensures a deterministic early exit instead of a hard to debug bus error later.
      • saved_model_cli aot_compile_cpu allows you to compile saved models to XLA header+object files and include them in your C++ programs.
      • Enable Igamma, Igammac for XLA.
    • Deterministic Op Functionality:
      • The XLA reduction emitter is deterministic when the environment variable TF_DETERMINISTIC_OPS is set to "true" or "1". This extends deterministic tf.nn.bias_add back-prop functionality (and therefore also deterministic back-prop of bias-addition in Keras layers) to include when XLA JIT compilation is enabled.
      • Fix a problem in which, when running on a CUDA GPU with either the environment variable TF_DETERMINISTIC_OPS or TF_CUDNN_DETERMINISTIC set to "true" or "1", some layer configurations led to an exception with the message "No algorithm worked!"
    • Tracing and Debugging:
      • Add source, destination name to _send traceme to allow easier debugging.
      • Add traceme event to fastpathexecute.
    • Other:
      • Fix an issue with AUC.reset_states for multi-label AUC #35852
      • Fix the TF upgrade script to not delete files when there is a parsing error and the output mode is in-place.
      • Move tensorflow/core:framework/*_pyclif rules to tensorflow/core/framework:*_pyclif.
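    As referenced in the tf.keras bug-fix item above, here is a minimal sketch of passing experimental_aggregate_gradients=False to apply_gradients in a custom training loop; the model, data shapes, and gradient-clipping step are illustrative assumptions.

    ```python
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
    optimizer = tf.keras.optimizers.SGD(0.01)
    loss_fn = tf.keras.losses.MeanSquaredError()

    @tf.function
    def train_step(x, y):
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        # Custom processing of the gradients (here: simple norm clipping).
        grads = [tf.clip_by_norm(g, 1.0) for g in grads]
        # Signal that the gradients are already aggregated/processed, so the
        # optimizer should not aggregate them again across replicas.
        optimizer.apply_gradients(
            zip(grads, model.trainable_variables),
            experimental_aggregate_gradients=False)
        return loss

    loss = train_step(tf.random.normal((4, 3)), tf.random.normal((4, 1)))
    ```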

    Thanks to our Contributors

    🚀 This release contains contributions from many people at Google, as well as:

    👕 372046933, 8bitmp3, aaronhma, Abin Shahab, Aditya Patwardhan, Agoniii, Ahti Kitsik, Alan Yee, Albin Joy, Alex Hoffman, Alexander Grund, Alexandre E. Eichenberger, Amit Kumar Jaiswal, amoitra, Andrew Anderson, Angus-Luo, Anthony Barbier, Anton Kachatkou, Anuj Rawat, archis, Arpan-Dhatt, Arvind Sundararajan, Ashutosh Hathidara, autoih, Bairen Yi, Balint Cristian, Bas Aarts, BashirSbaiti, Basit Ayantunde, Ben Barsdell, Benjamin Gaillard, boron, Brett Koonce, Bryan Cutler, Christian Goll, Christian Sachs, Clayne Robison, comet, Daniel Falbel, Daria Zhuravleva, darsh8200, David Truby, Dayananda-V, deepakm, Denis Khalikov, Devansh Singh, Dheeraj R Reddy, Diederik Van Liere, Diego Caballero, Dominic Jack, dothinking, Douman, Drake Gens, Duncan Riach, Ehsan Toosi, ekuznetsov139, Elena Zhelezina, elzino, Ending2015a, Eric Schweitz, Erik Zettel, Ethan Saadia, Eugene Kuznetsov, Evgeniy Zheltonozhskiy, Ewout Ter Hoeven, exfalso, FAIJUL, Fangjun Kuang, Fei Hu, Frank Laub, Frederic Bastien, Fredrik Knutsson, frreiss, Frédéric Rechtenstein, fsx950223, Gaurav Singh, gbaned, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, Hans Gaiser, Hans Pabst, Haoyu Wu, Harry Slatyer, hsahovic, Hugo, Hugo Sjöberg, IrinaM21, jacco, Jake Tae, Jean-Denis Lesage, Jean-Michel Gorius, Jeff Daily, Jens Elofsson, Jerry Shih, jerryyin, Jin Mingjian, Jinjing Zhou, JKIsaacLee, jojimonv, Jonathan Dekhtiar, Jose Ignacio Gomez, Joseph-Rance, Judd, Julian Gross, Kaixi Hou, Kaustubh Maske Patil, Keunwoo Choi, Kevin Hanselman, Khor Chean Wei, Kilaru Yasaswi Sri Chandra Gandhi, Koan-Sin Tan, Koki Ibukuro, Kristian Holsheimer, kurileo, Lakshay Tokas, Lee Netherton, leike666666, Leslie-Fang-Intel, Li, Guizi, LIUJIAN435, Lukas Geiger, Lyo Nguyen, madisetti, Maher Jendoubi, Mahmoud Abuzaina, Manuel Freiberger, Marcel Koester, Marco Jacopo Ferrarotti, Markus Franke, marload, Mbah-Javis, mbhuiyan, Meng Zhang, Michael Liao, MichaelKonobeev, Michal Tarnowski, Milan Straka, minoring, Mohamed Nour Abouelseoud, MoussaMM, Mrinal Jain, mrTsjolder, Måns Nilsson, Namrata Bhave, Nicholas Gao, Niels Ole Salscheider, nikochiko, Niranjan Hasabnis, Nishidha Panpaliya, nmostafa, Noah Trenaman, nuka137, Officium, Owen L - Sfe, Pallavi G, Paul Andrey, Peng Sun, Peng Wu, Phil Pearl, PhilipMay, pingsutw, Pooya Davoodi, PragmaTwice, pshiko, Qwerty71, R Gomathi, Rahul Huilgol, Richard Xiao, Rick Wierenga, Roberto Rosmaninho, ruchit2801, Rushabh Vasani, Sami, Sana Damani, Sarvesh Dubey, Sasan Jafarnejad, Sergii Khomenko, Shane Smiskol, Shaochen Shi, sharkdtu, Shawn Presser, ShengYang1, Shreyash Patodia, Shyam Sundar Dhanabalan, Siju Samuel, Somyajit Chakraborty Sam, Srihari Humbarwadi, srinivasan.narayanamoorthy, Srishti Yadav, Steph-En-M, Stephan Uphoff, Stephen Mugisha, SumanSudhir, Taehun Kim, Tamas Bela Feher, TengLu, Tetragramm, Thierry Herrmann, Tian Jin, tigertang, Tom Carchrae, Tom Forbes, Trent Lo, Victor Peng, vijayphoenix, Vincent Abriou, Vishal Bhola, Vishnuvardhan Janapati, vladbataev, VoVAllen, Wallyss Lima, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, William Zhang, Xiaoming (Jason) Cui, Xiaoquan Kong, Xinan Jiang, Yasir Modak, Yasuhiro Matsumoto, Yaxun (Sam) Liu, Yong Tang, Ytyt-Yt, yuan, Yuan Mingshuai, Yuan Tang, Yuki Ueda, Yusup, zhangshijin, zhuwenxi

  • v2.2.0-rc4 Changes

    April 30, 2020

    🚀 Release 2.2.0

    ⚡️ TensorFlow 2.2 discontinues support for Python 2, previously announced as following Python 2's EOL on January 1, 2020.

    🚀 Coinciding with this change, new releases of TensorFlow's Docker images provide Python 3 exclusively. Because all images now use Python 3, Docker tags containing -py3 will no longer be provided and existing -py3 tags like latest-py3 will not be updated.

    Major Features and Improvements

    Replaced the scalar type for string tensors from std::string to tensorflow::tstring which is now ABI stable.

    A new Profiler for TF 2 for CPU/GPU/TPU. It offers both device and host performance analysis, including input pipeline and TF Ops. Optimization advisory is provided whenever possible. Please see this tutorial and guide for usage guidelines.

    🗄 Export C++ functions to Python using pybind11 instead of SWIG, as part of our effort to deprecate SWIG.

    tf.distribute:

    • Support added for global sync BatchNormalization by using the newly added tf.keras.layers.experimental.SyncBatchNormalization layer. This layer will sync BatchNormalization statistics every step across all replicas taking part in sync training.
    • Performance improvements for GPU multi-worker distributed training using tf.distribute.experimental.MultiWorkerMirroredStrategy
      • Update NVIDIA NCCL to 2.5.7-1 for better performance and performance tuning. Please see the NCCL developer guide for more information.
      • Support gradient allreduce in float16. See this example usage.
      • Experimental support for all-reduce gradient packing, allowing gradient aggregation to overlap with backward-pass computation.
      • Deprecated the experimental_run_v2 method for distribution strategies and renamed it to run, as it is no longer experimental.

      - Add CompositeTensor support for DistributedIterators. This should help prevent unnecessary function retracing and memory leaks.

    tf.keras:

    • Model.fit major improvements:
      • You can now use custom training logic with Model.fit by overriding Model.train_step.
      • Easily write state-of-the-art training loops without worrying about all of the features Model.fit handles for you (distribution strategies, callbacks, data formats, looping logic, etc.).
      • See the default Model.train_step for an example of what this function should look like. The same applies to validation and inference via Model.test_step and Model.predict_step.
      • SavedModel now uses its own Model._saved_model_inputs_spec attribute instead of
        relying on Model.inputs and Model.input_names, which are no longer set for subclassed Models.
        This attribute is set in eager, tf.function, and graph modes, removing the need for users to
        manually call Model._set_inputs when using Custom Training Loops (CTLs).
      • Dynamic shapes are supported for generators by calling the Model on the first batch we "peek" from the generator.
        This used to happen implicitly in Model._standardize_user_data. Long-term, a solution where the
        DataAdapter doesn't need to call the Model is probably preferable.
    • The SavedModel format now supports all Keras built-in layers (including metrics, preprocessing layers, and stateful RNN layers)

    - Update Keras batch normalization layer to use the running mean and average computation in the fused_batch_norm. You should see significant performance improvements when using fused_batch_norm in Eager mode.

    tf.lite:

    - Enable TFLite experimental new converter by default.

    XLA

    • XLA now builds and works on Windows. All prebuilt packages come with XLA available.
    • XLA can be enabled for a tf.function with “compile or throw exception” semantics on CPU and GPU.

    💥 Breaking Changes

    • tf.keras:
      • In tf.keras.applications the name of the "top" layer has been standardized to "predictions". This is only a problem if your code relies on the exact name of the layer.
      • The Huber loss function has been updated to be consistent with other Keras losses. It now computes the mean over the last axis of per-sample losses before applying the reduction function.
    • AutoGraph no longer converts functions passed to tf.py_function, tf.py_func and tf.numpy_function.
    • Deprecating XLA_CPU and XLA_GPU devices with this release.
    • Increasing the minimum Bazel version required to build TF to 2.0.0, in order to use Bazel's cc_experimental_shared_library.
    • Keras compile/fit behavior for functional and subclassed models has been unified. Model properties such as metrics and metrics_names will now be available only after training/evaluating the model on actual data for functional models. metrics will now include model loss and output losses. The loss_functions property has been removed from the model; it was an undocumented property that was accidentally public.

    Known Caveats

    • The current TensorFlow release now requires gast version 0.3.3.
    • There is a known issue that might surface with CompositeTensor on TPU pods. As a temporary workaround you can set _enable_legacy_iterators to True.

    🐛 Bug Fixes and Other Changes

    • tf.data:
      • Removed autotune_algorithm from experimental optimization options.
    • TF Core:
      • tf.constant always creates CPU tensors irrespective of the current device context.
      • Eager TensorHandles maintain a list of mirrors for any copies to local or remote devices. This avoids any redundant copies due to op execution.
      • For tf.Tensor & tf.Variable, .experimental_ref() is no longer experimental and is available as simply .ref().
      • pfor/vectorized_map: Added support for vectorizing 56 more ops. Vectorizing tf.cond is also supported now.
      • Set as much partial shape as we can infer statically within the gradient impl of the gather op.
      • Gradient of tf.while_loop emits a StatelessWhile op if the cond and body functions are stateless. This allows multiple gradient While ops to run in parallel under a distribution strategy.
      • Speed up GradientTape in eager mode by auto-generating list of op inputs/outputs which are unused and hence not cached for gradient functions.
      • Support back_prop=False in while_v2 but mark it as deprecated.
      • Improve error message when attempting to use None in data-dependent control flow.
      • Add RaggedTensor.numpy().
      • Update RaggedTensor.__getitem__ to preserve uniform dimensions & allow indexing into uniform dimensions.
      • Update tf.expand_dims to always insert the new dimension as a non-ragged dimension.
      • Update tf.embedding_lookup to use partition_strategy and max_norm when ids is ragged.
      • Allow batch_dims==rank(indices) in tf.gather.
      • Add support for bfloat16 in tf.print.
    • tf.distribute:
      • Support embedding_column with variable-length input features for MultiWorkerMirroredStrategy.
    • tf.keras:
      • Added the experimental_aggregate_gradients argument to tf.keras.optimizer.Optimizer.apply_gradients. This allows custom gradient aggregation and processing of aggregated gradients in a custom training loop.
      • Allow pathlib.Path paths for loading models via Keras API.
    • tf.function/AutoGraph:
      • AutoGraph is now available in ReplicaContext.merge_call, Strategy.extended.update and Strategy.extended.update_non_slot.
      • Experimental support for shape invariants has been enabled in tf.function. See the API docs for tf.autograph.experimental.set_loop_options for additional info.
      • AutoGraph error messages now exclude frames corresponding to APIs internal to AutoGraph.
      • Improve shape inference for tf.function input arguments to unlock more Grappler optimizations in TensorFlow 2.x.
      • Improve automatic control dependency management of resources by allowing resource reads to occur in parallel and synchronizing only on writes.
      • Fix execution order of multiple stateful calls to experimental_run_v2 in tf.function.
      • You can now iterate over RaggedTensors using a for loop inside tf.function.
    • tf.lite:
      • Migrated the tf.lite C inference API out of experimental into lite/c.
      • Add an option to disallow NNAPI CPU / partial acceleration on Android 10.
      • TFLite Android AARs now include the C headers and APIs required to use TFLite from native code.
      • Refactors the delegate and delegate kernel sources to allow usage in the linter.
      • Limit delegated ops to actually supported ones if a device name is specified or NNAPI CPU Fallback is disabled.
      • TFLite now supports the tf.math.reciprocal op by lowering it to the tf.div op.
      • TFLite's unpack op now supports boolean tensor inputs.
      • Microcontroller and embedded code moved from experimental to main TensorFlow Lite folder
      • Check for large TFLite tensors.
      • Fix GPU delegate crash with C++17.
      • Add 5D support to TFLite strided_slice.
      • Fix error in delegation of DEPTH_TO_SPACE to NNAPI causing op not to be accelerated.
      • Fix segmentation fault when running a model with LSTM nodes using NNAPI Delegate
      • Fix NNAPI delegate failure when an operand for Maximum/Minimum operation is a scalar.
      • Fix NNAPI delegate failure when Axis input for reduce operation is a scalar.
      • Expose option to limit the number of partitions that will be delegated to NNAPI.
      • If a target accelerator is specified, use its feature level to determine operations to delegate instead of SDK version.
    • tf.random:
      • Various random number generation improvements:
      • Add a fast path for default random_uniform
      • random_seed documentation improvement.
      • RandomBinomial broadcasts and appends the sample shape to the left rather than the right.
      • Added tf.random.stateless_binomial, tf.random.stateless_gamma, tf.random.stateless_poisson
      • tf.random.stateless_uniform now supports unbounded sampling of int types.
    • Math and Linear Algebra:
      • Add tf.linalg.LinearOperatorTridiag.
      • Add LinearOperatorBlockLowerTriangular
      • Add broadcasting support to tf.linalg.triangular_solve (#26204) and tf.math.invert_permutation.
      • Add tf.math.sobol_sample op.
      • Add tf.math.xlog1py.
      • Add tf.math.special.{dawsn,expi,fresnel_cos,fresnel_sin,spence}.
      • Add a Modified Discrete Cosine Transform (MDCT) and its inverse to tf.signal.
    • TPU Enhancements:
      • Refactor TpuClusterResolver to move shared logic to a separate pip package.
      • Support configuring the TPU software version from the Cloud TPU client.
      • Allowed TPU embedding weight decay factor to be multiplied by learning rate.
    • 👍 XLA Support:
      • Add standalone XLA AOT runtime target + relevant .cc sources to pip package.
      • Add check for memory alignment to MemoryAllocation::MemoryAllocation() on 32-bit ARM. This ensures a deterministic early exit instead of a hard to debug bus error later.
      • saved_model_cli aot_compile_cpu allows you to compile saved models to XLA header+object files and include them in your C++ programs.
      • Enable Igamma, Igammac for XLA.
    • Deterministic Op Functionality:
      • The XLA reduction emitter is deterministic when the environment variable TF_DETERMINISTIC_OPS is set to "true" or "1". This extends deterministic tf.nn.bias_add back-prop functionality (and therefore also deterministic back-prop of bias-addition in Keras layers) to include when XLA JIT compilation is enabled.
      • Fix a problem in which, when running on a CUDA GPU with either the environment variable TF_DETERMINISTIC_OPS or TF_CUDNN_DETERMINISTIC set to "true" or "1", some layer configurations led to an exception with the message "No algorithm worked!"
    • Tracing and Debugging:
      • Add source, destination name to _send traceme to allow easier debugging.
      • Add traceme event to fastpathexecute.
    • Other:
      • Fix an issue with AUC.reset_states for multi-label AUC #35852
      • Fix the TF upgrade script to not delete files when there is a parsing error and the output mode is in-place.
      • Move tensorflow/core:framework/*_pyclif rules to tensorflow/core/framework:*_pyclif.

    Thanks to our Contributors

    🚀 This release contains contributions from many people at Google, as well as:

    👕 372046933, 8bitmp3, aaronhma, Abin Shahab, Aditya Patwardhan, Agoniii, Ahti Kitsik, Alan Yee, Albin Joy, Alex Hoffman, Alexander Grund, Alexandre E. Eichenberger, Amit Kumar Jaiswal, amoitra, Andrew Anderson, Angus-Luo, Anthony Barbier, Anton Kachatkou, Anuj Rawat, archis, Arpan-Dhatt, Arvind Sundararajan, Ashutosh Hathidara, autoih, Bairen Yi, Balint Cristian, Bas Aarts, BashirSbaiti, Basit Ayantunde, Ben Barsdell, Benjamin Gaillard, boron, Brett Koonce, Bryan Cutler, Christian Goll, Christian Sachs, Clayne Robison, comet, Daniel Falbel, Daria Zhuravleva, darsh8200, David Truby, Dayananda-V, deepakm, Denis Khalikov, Devansh Singh, Dheeraj R Reddy, Diederik Van Liere, Diego Caballero, Dominic Jack, dothinking, Douman, Drake Gens, Duncan Riach, Ehsan Toosi, ekuznetsov139, Elena Zhelezina, elzino, Ending2015a, Eric Schweitz, Erik Zettel, Ethan Saadia, Eugene Kuznetsov, Evgeniy Zheltonozhskiy, Ewout Ter Hoeven, exfalso, FAIJUL, Fangjun Kuang, Fei Hu, Frank Laub, Frederic Bastien, Fredrik Knutsson, frreiss, Frédéric Rechtenstein, fsx950223, Gaurav Singh, gbaned, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, Hans Gaiser, Hans Pabst, Haoyu Wu, Harry Slatyer, hsahovic, Hugo, Hugo Sjöberg, IrinaM21, jacco, Jake Tae, Jean-Denis Lesage, Jean-Michel Gorius, Jeff Daily, Jens Elofsson, Jerry Shih, jerryyin, Jin Mingjian, Jinjing Zhou, JKIsaacLee, jojimonv, Jonathan Dekhtiar, Jose Ignacio Gomez, Joseph-Rance, Judd, Julian Gross, Kaixi Hou, Kaustubh Maske Patil, Keunwoo Choi, Kevin Hanselman, Khor Chean Wei, Kilaru Yasaswi Sri Chandra Gandhi, Koan-Sin Tan, Koki Ibukuro, Kristian Holsheimer, kurileo, Lakshay Tokas, Lee Netherton, leike666666, Leslie-Fang-Intel, Li, Guizi, LIUJIAN435, Lukas Geiger, Lyo Nguyen, madisetti, Maher Jendoubi, Mahmoud Abuzaina, Manuel Freiberger, Marcel Koester, Marco Jacopo Ferrarotti, Markus Franke, marload, Mbah-Javis, mbhuiyan, Meng Zhang, Michael Liao, MichaelKonobeev, Michal Tarnowski, Milan Straka, minoring, Mohamed Nour Abouelseoud, MoussaMM, Mrinal Jain, mrTsjolder, Måns Nilsson, Namrata Bhave, Nicholas Gao, Niels Ole Salscheider, nikochiko, Niranjan Hasabnis, Nishidha Panpaliya, nmostafa, Noah Trenaman, nuka137, Officium, Owen L - Sfe, Pallavi G, Paul Andrey, Peng Sun, Peng Wu, Phil Pearl, PhilipMay, pingsutw, Pooya Davoodi, PragmaTwice, pshiko, Qwerty71, R Gomathi, Rahul Huilgol, Richard Xiao, Rick Wierenga, Roberto Rosmaninho, ruchit2801, Rushabh Vasani, Sami, Sana Damani, Sarvesh Dubey, Sasan Jafarnejad, Sergii Khomenko, Shane Smiskol, Shaochen Shi, sharkdtu, Shawn Presser, ShengYang1, Shreyash Patodia, Shyam Sundar Dhanabalan, Siju Samuel, Somyajit Chakraborty Sam, Srihari Humbarwadi, srinivasan.narayanamoorthy, Srishti Yadav, Steph-En-M, Stephan Uphoff, Stephen Mugisha, SumanSudhir, Taehun Kim, Tamas Bela Feher, TengLu, Tetragramm, Thierry Herrmann, Tian Jin, tigertang, Tom Carchrae, Tom Forbes, Trent Lo, Victor Peng, vijayphoenix, Vincent Abriou, Vishal Bhola, Vishnuvardhan Janapati, vladbataev, VoVAllen, Wallyss Lima, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, William Zhang, Xiaoming (Jason) Cui, Xiaoquan Kong, Xinan Jiang, Yasir Modak, Yasuhiro Matsumoto, Yaxun (Sam) Liu, Yong Tang, Ytyt-Yt, yuan, Yuan Mingshuai, Yuan Tang, Yuki Ueda, Yusup, zhangshijin, zhuwenxi

  • v2.2.0-rc3 Changes

    April 14, 2020

    🚀 Release 2.2.0

    Major Features and Improvements

    Replaced the scalar type for string tensors from std::string to tensorflow::tstring which is now ABI stable.

    A new Profiler for TF 2 for CPU/GPU/TPU. It offers both device and host performance analysis, including input pipeline and TF Ops. Optimization advisory is provided whenever possible. Please see this tutorial and guide for usage guidelines.

    🗄 Export C++ functions to Python using pybind11 instead of SWIG, as part of our effort to deprecate SWIG.

    tf.distribute:

    • Support added for global sync BatchNormalization by using the newly added tf.keras.layers.experimental.SyncBatchNormalization layer. This layer will sync BatchNormalization statistics every step across all replicas taking part in sync training.
    • Performance improvements for GPU multi-worker distributed training using tf.distribute.experimental.MultiWorkerMirroredStrategy
      • Update NVIDIA NCCL to 2.5.7-1 for better performance and performance tuning. Please see the NCCL developer guide for more information.
      • Support gradient allreduce in float16. See this example usage.
      • Experimental support for all-reduce gradient packing, allowing gradient aggregation to overlap with backward-pass computation.

      - Deprecated the experimental_run_v2 method for distribution strategies and renamed it to run, as it is no longer experimental.

    tf.keras:

    • Model.fit major improvements:
      • You can now use custom training logic with Model.fit by overriding Model.train_step.
      • Easily write state-of-the-art training loops without worrying about all of the features Model.fit handles for you (distribution strategies, callbacks, data formats, looping logic, etc.).
      • See the default Model.train_step for an example of what this function should look like. The same applies to validation and inference via Model.test_step and Model.predict_step.
      • SavedModel now uses its own Model._saved_model_inputs_spec attribute instead of
        relying on Model.inputs and Model.input_names, which are no longer set for subclassed Models.
        This attribute is set in eager, tf.function, and graph modes, removing the need for users to
        manually call Model._set_inputs when using Custom Training Loops (CTLs).
      • Dynamic shapes are supported for generators by calling the Model on the first batch we "peek" from the generator.
        This used to happen implicitly in Model._standardize_user_data. Long-term, a solution where the
        DataAdapter doesn't need to call the Model is probably preferable.
    • The SavedModel format now supports all Keras built-in layers (including metrics, preprocessing layers, and stateful RNN layers)

    - Update Keras batch normalization layer to use the running mean and average computation in the fused_batch_norm. You should see significant performance improvements when using fused_batch_norm in Eager mode.

    tf.lite:

    - Enable TFLite experimental new converter by default.

    XLA

    • XLA now builds and works on Windows. All prebuilt packages come with XLA available.
    • XLA can be enabled for a tf.function with “compile or throw exception” semantics on CPU and GPU.

    💥 Breaking Changes

    • tf.keras:
      • In tf.keras.applications the name of the "top" layer has been standardized to "predictions". This is only a problem if your code relies on the exact name of the layer.
      • The Huber loss function has been updated to be consistent with other Keras losses. It now computes the mean over the last axis of per-sample losses before applying the reduction function.
    • AutoGraph no longer converts functions passed to tf.py_function, tf.py_func and tf.numpy_function.
    • Deprecating XLA_CPU and XLA_GPU devices with this release.
    • Increasing the minimum Bazel version required to build TF to 2.0.0, in order to use Bazel's cc_experimental_shared_library.
    • Keras compile/fit behavior for functional and subclassed models has been unified. Model properties such as metrics and metrics_names will now be available only after training/evaluating the model on actual data for functional models. metrics will now include model loss and output losses. The loss_functions property has been removed from the model; it was an undocumented property that was accidentally public.

    Known Caveats

    • 🚀 Due to certain unforeseen circumstances, we are unable to release MacOS py3.8 binaries, but Windows/Linux binaries for py3.8 are available.
    • The current TensorFlow release now requires gast version 0.3.3.

    🐛 Bug Fixes and Other Changes

    • tf.data:
      • Removed autotune_algorithm from experimental optimization options.
    • TF Core:
      • tf.constant always creates CPU tensors irrespective of the current device context.
      • Eager TensorHandles maintain a list of mirrors for any copies to local or remote devices. This avoids any redundant copies due to op execution.
      • For tf.Tensor & tf.Variable, .experimental_ref() is no longer experimental and is available as simply .ref().
      • pfor/vectorized_map: Added support for vectorizing 56 more ops. Vectorizing tf.cond is also supported now.
      • Set as much partial shape as we can infer statically within the gradient impl of the gather op.
      • Gradient of tf.while_loop emits a StatelessWhile op if the cond and body functions are stateless. This allows multiple gradient While ops to run in parallel under a distribution strategy.
      • Speed up GradientTape in eager mode by auto-generating list of op inputs/outputs which are unused and hence not cached for gradient functions.
      • Support back_prop=False in while_v2 but mark it as deprecated.
      • Improve error message when attempting to use None in data-dependent control flow.
      • Add RaggedTensor.numpy().
      • Update RaggedTensor.__getitem__ to preserve uniform dimensions & allow indexing into uniform dimensions.
      • Update tf.expand_dims to always insert the new dimension as a non-ragged dimension.
      • Update tf.embedding_lookup to use partition_strategy and max_norm when ids is ragged.
      • Allow batch_dims==rank(indices) in tf.gather.
      • Add support for bfloat16 in tf.print.
    • tf.distribute:
      • Support embedding_column with variable-length input features for MultiWorkerMirroredStrategy.
    • tf.keras:
      • Added the experimental_aggregate_gradients argument to tf.keras.optimizer.Optimizer.apply_gradients. This allows custom gradient aggregation and processing of aggregated gradients in a custom training loop.
      • Allow pathlib.Path paths for loading models via Keras API.
    • tf.function/AutoGraph:
      • AutoGraph is now available in ReplicaContext.merge_call, Strategy.extended.update and Strategy.extended.update_non_slot.
      • Experimental support for shape invariants has been enabled in tf.function. See the API docs for tf.autograph.experimental.set_loop_options for additional info.
      • AutoGraph error messages now exclude frames corresponding to APIs internal to AutoGraph.
      • Improve shape inference for tf.function input arguments to unlock more Grappler optimizations in TensorFlow 2.x.
      • Improve automatic control dependency management of resources by allowing resource reads to occur in parallel and synchronizing only on writes.
      • Fix execution order of multiple stateful calls to experimental_run_v2 in tf.function.
      • You can now iterate over RaggedTensors using a for loop inside tf.function.
    • tf.lite:
      • Migrated the tf.lite C inference API out of experimental into lite/c.
      • Add an option to disallow NNAPI CPU / partial acceleration on Android 10.
      • TFLite Android AARs now include the C headers and APIs required to use TFLite from native code.
      • Refactors the delegate and delegate kernel sources to allow usage in the linter.
      • Limit delegated ops to actually supported ones if a device name is specified or NNAPI CPU Fallback is disabled.
      • TFLite now supports the tf.math.reciprocal op by lowering it to the tf.div op.
      • TFLite's unpack op now supports boolean tensor inputs.
      • Microcontroller and embedded code moved from experimental to main TensorFlow Lite folder
      • Check for large TFLite tensors.
      • Fix GPU delegate crash with C++17.
      • Add 5D support to TFLite strided_slice.
      • Fix error in delegation of DEPTH_TO_SPACE to NNAPI causing op not to be accelerated.
      • Fix segmentation fault when running a model with LSTM nodes using NNAPI Delegate
      • Fix NNAPI delegate failure when an operand for Maximum/Minimum operation is a scalar.
      • Fix NNAPI delegate failure when Axis input for reduce operation is a scalar.
      • Expose option to limit the number of partitions that will be delegated to NNAPI.
      • If a target accelerator is specified, use its feature level to determine operations to delegate instead of SDK version.
    • tf.random:
      • Various random number generation improvements:
      • Add a fast path for default random_uniform
      • random_seed documentation improvement.
      • RandomBinomial broadcasts and appends the sample shape to the left rather than the right.
      • Added tf.random.stateless_binomial, tf.random.stateless_gamma, tf.random.stateless_poisson
      • tf.random.stateless_uniform now supports unbounded sampling of int types.
    • Math and Linear Algebra:
      • Add tf.linalg.LinearOperatorTridiag.
      • Add LinearOperatorBlockLowerTriangular
      • Add broadcasting support to tf.linalg.triangular_solve (#26204) and tf.math.invert_permutation.
      • Add tf.math.sobol_sample op.
      • Add tf.math.xlog1py.
      • Add tf.math.special.{dawsn,expi,fresnel_cos,fresnel_sin,spence}.
      • Add a Modified Discrete Cosine Transform (MDCT) and its inverse to tf.signal.
    • TPU Enhancements:
      • Refactor TpuClusterResolver to move shared logic to a separate pip package.
      • Support configuring the TPU software version from the Cloud TPU client.
      • Allowed TPU embedding weight decay factor to be multiplied by learning rate.
    • 👍 XLA Support:
      • Add standalone XLA AOT runtime target + relevant .cc sources to pip package.
      • Add check for memory alignment to MemoryAllocation::MemoryAllocation() on 32-bit ARM. This ensures a deterministic early exit instead of a hard to debug bus error later.
      • saved_model_cli aot_compile_cpu allows you to compile saved models to XLA header+object files and include them in your C++ programs.
      • Enable Igamma, Igammac for XLA.
      • XLA reduction emitter is deterministic when the environment variable TF_DETERMINISTIC_OPS is set.
    • Tracing and Debugging:
      • Add source, destination name to _send traceme to allow easier debugging.
      • Add traceme event to fastpathexecute.
    • Other:
      • Fix an issue with AUC.reset_states for multi-label AUC #35852
      • Fix the TF upgrade script to not delete files when there is a parsing error and the output mode is in-place.
      • Move tensorflow/core:framework/*_pyclif rules to tensorflow/core/framework:*_pyclif.

    Thanks to our Contributors

    🚀 This release contains contributions from many people at Google, as well as:

    👕 372046933, 8bitmp3, aaronhma, Abin Shahab, Aditya Patwardhan, Agoniii, Ahti Kitsik, Alan Yee, Albin Joy, Alex Hoffman, Alexander Grund, Alexandre E. Eichenberger, Amit Kumar Jaiswal, amoitra, Andrew Anderson, Angus-Luo, Anthony Barbier, Anton Kachatkou, Anuj Rawat, archis, Arpan-Dhatt, Arvind Sundararajan, Ashutosh Hathidara, autoih, Bairen Yi, Balint Cristian, Bas Aarts, BashirSbaiti, Basit Ayantunde, Ben Barsdell, Benjamin Gaillard, boron, Brett Koonce, Bryan Cutler, Christian Goll, Christian Sachs, Clayne Robison, comet, Daniel Falbel, Daria Zhuravleva, darsh8200, David Truby, Dayananda-V, deepakm, Denis Khalikov, Devansh Singh, Dheeraj R Reddy, Diederik Van Liere, Diego Caballero, Dominic Jack, dothinking, Douman, Drake Gens, Duncan Riach, Ehsan Toosi, ekuznetsov139, Elena Zhelezina, elzino, Ending2015a, Eric Schweitz, Erik Zettel, Ethan Saadia, Eugene Kuznetsov, Evgeniy Zheltonozhskiy, Ewout Ter Hoeven, exfalso, FAIJUL, Fangjun Kuang, Fei Hu, Frank Laub, Frederic Bastien, Fredrik Knutsson, frreiss, Frédéric Rechtenstein, fsx950223, Gaurav Singh, gbaned, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, Hans Gaiser, Hans Pabst, Haoyu Wu, Harry Slatyer, hsahovic, Hugo, Hugo Sjöberg, IrinaM21, jacco, Jake Tae, Jean-Denis Lesage, Jean-Michel Gorius, Jeff Daily, Jens Elofsson, Jerry Shih, jerryyin, Jin Mingjian, Jinjing Zhou, JKIsaacLee, jojimonv, Jonathan Dekhtiar, Jose Ignacio Gomez, Joseph-Rance, Judd, Julian Gross, Kaixi Hou, Kaustubh Maske Patil, Keunwoo Choi, Kevin Hanselman, Khor Chean Wei, Kilaru Yasaswi Sri Chandra Gandhi, Koan-Sin Tan, Koki Ibukuro, Kristian Holsheimer, kurileo, Lakshay Tokas, Lee Netherton, leike666666, Leslie-Fang-Intel, Li, Guizi, LIUJIAN435, Lukas Geiger, Lyo Nguyen, madisetti, Maher Jendoubi, Mahmoud Abuzaina, Manuel Freiberger, Marcel Koester, Marco Jacopo Ferrarotti, Markus Franke, marload, Mbah-Javis, mbhuiyan, Meng Zhang, Michael Liao, MichaelKonobeev, Michal Tarnowski, Milan Straka, minoring, Mohamed Nour Abouelseoud, MoussaMM, Mrinal Jain, mrTsjolder, Måns Nilsson, Namrata Bhave, Nicholas Gao, Niels Ole Salscheider, nikochiko, Niranjan Hasabnis, Nishidha Panpaliya, nmostafa, Noah Trenaman, nuka137, Officium, Owen L - Sfe, Pallavi G, Paul Andrey, Peng Sun, Peng Wu, Phil Pearl, PhilipMay, pingsutw, Pooya Davoodi, PragmaTwice, pshiko, Qwerty71, R Gomathi, Rahul Huilgol, Richard Xiao, Rick Wierenga, Roberto Rosmaninho, ruchit2801, Rushabh Vasani, Sami, Sana Damani, Sarvesh Dubey, Sasan Jafarnejad, Sergii Khomenko, Shane Smiskol, Shaochen Shi, sharkdtu, Shawn Presser, ShengYang1, Shreyash Patodia, Shyam Sundar Dhanabalan, Siju Samuel, Somyajit Chakraborty Sam, Srihari Humbarwadi, srinivasan.narayanamoorthy, Srishti Yadav, Steph-En-M, Stephan Uphoff, Stephen Mugisha, SumanSudhir, Taehun Kim, Tamas Bela Feher, TengLu, Tetragramm, Thierry Herrmann, Tian Jin, tigertang, Tom Carchrae, Tom Forbes, Trent Lo, Victor Peng, vijayphoenix, Vincent Abriou, Vishal Bhola, Vishnuvardhan Janapati, vladbataev, VoVAllen, Wallyss Lima, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, William Zhang, Xiaoming (Jason) Cui, Xiaoquan Kong, Xinan Jiang, Yasir Modak, Yasuhiro Matsumoto, Yaxun (Sam) Liu, Yong Tang, Ytyt-Yt, yuan, Yuan Mingshuai, Yuan Tang, Yuki Ueda, Yusup, zhangshijin, zhuwenxi

  • v2.2.0-rc2 Changes

    March 27, 2020

    🚀 Release 2.2.0

    Major Features and Improvements

    • Replaced the scalar type for string tensors from std::string to tensorflow::tstring which is now ABI stable.
    • A new Profiler for TF 2 for CPU/GPU/TPU. It offers both device and host performance analysis, including input pipeline and TF Ops. Optimization advisory is provided whenever possible. Please see this tutorial and guide for usage guidelines.
    • 🗄 Export C++ functions to Python using pybind11 instead of SWIG, as part of our effort to deprecate SWIG.
    • tf.distribute:
      • Support added for global sync BatchNormalization by using the newly added tf.keras.layers.experimental.SyncBatchNormalization layer. This layer will sync BatchNormalization statistics every step across all replicas taking part in sync training.
      • Performance improvements for GPU multi-worker distributed training using tf.distribute.experimental.MultiWorkerMirroredStrategy
      • Update NVIDIA NCCL to 2.5.7-1 for better performance and performance tuning. Please see the NCCL developer guide for more information.
      • Support gradient allreduce in float16. See this example usage.
      • Experimental support for all-reduce gradient packing, allowing gradient aggregation to overlap with backward-pass computation.
      • Deprecated the experimental_run_v2 method for distribution strategies and renamed it to run, as it is no longer experimental.
    • tf.keras:
      • Model.fit major improvements:
      • You can now use custom training logic with Model.fit by overriding Model.train_step.
      • Easily write state-of-the-art training loops without worrying about all of the features Model.fit handles for you (distribution strategies, callbacks, data formats, looping logic, etc.).
      • See the default Model.train_step for an example of what this function should look like.
      • The same applies to validation and inference via Model.test_step and Model.predict_step.
      • The SavedModel format now supports all Keras built-in layers (including metrics, preprocessing layers, and stateful RNN layers)
    • tf.lite:
      • Enable TFLite experimental new converter by default.
    • XLA
      • XLA now builds and works on Windows. All prebuilt packages come with XLA available.
      • XLA can be enabled for a tf.function with “compile or throw exception” semantics on CPU and GPU.

    💥 Breaking Changes

    • tf.keras:
      • In tf.keras.applications the name of the "top" layer has been standardized to "predictions". This is only a problem if your code relies on the exact name of the layer.
      • The Huber loss function has been updated to be consistent with other Keras losses. It now computes the mean over the last axis of per-sample losses before applying the reduction function.
    • AutoGraph no longer converts functions passed to tf.py_function, tf.py_func and tf.numpy_function.
    • Deprecating XLA_CPU and XLA_GPU devices with this release.
    • Increasing the minimum Bazel version required to build TF to 2.0.0, in order to use Bazel's cc_experimental_shared_library.

    Known Caveats

    • 🚀 Due to certain unforeseen circumstances, we are unable to release MacOS py3.8 binaries, but Windows/Linux binaries for py3.8 are available.
    • The current TensorFlow release now requires gast version 0.3.3.

    🐛 Bug Fixes and Other Changes

    • tf.data:
      • Removed autotune_algorithm from experimental optimization options.
    • TF Core:
      • tf.constant always creates CPU tensors irrespective of the current device context.
      • Eager TensorHandles maintain a list of mirrors for any copies to local or remote devices. This avoids any redundant copies due to op execution.
      • For tf.Tensor & tf.Variable, .experimental_ref() is no longer experimental and is available as simply .ref().
      • pfor/vectorized_map: Added support for vectorizing 56 more ops. Vectorizing tf.cond is also supported now.
      • Set as much partial shape as we can infer statically within the gradient impl of the gather op.
      • Gradient of tf.while_loop emits a StatelessWhile op if the cond and body functions are stateless. This allows multiple gradient While ops to run in parallel under a distribution strategy.
      • Speed up GradientTape in eager mode by auto-generating list of op inputs/outputs which are unused and hence not cached for gradient functions.
      • Support back_prop=False in while_v2 but mark it as deprecated.
      • Improve error message when attempting to use None in data-dependent control flow.
      • Add RaggedTensor.numpy().
      • Update RaggedTensor.__getitem__ to preserve uniform dimensions & allow indexing into uniform dimensions.
      • Update tf.expand_dims to always insert the new dimension as a non-ragged dimension.
      • Update tf.embedding_lookup to use partition_strategy and max_norm when ids is ragged.
      • Allow batch_dims==rank(indices) in tf.gather.
      • Add support for bfloat16 in tf.print.
    • tf.distribute:
      • Support embedding_column with variable-length input features for MultiWorkerMirroredStrategy.
    • tf.keras:
      • Added the all_reduce_sum_gradients argument to tf.keras.optimizer.Optimizer.apply_gradients. This allows custom gradient aggregation and processing of aggregated gradients in a custom training loop.
      • Allow pathlib.Path paths for loading models via Keras API.
    • tf.function/AutoGraph:
      • AutoGraph is now available in ReplicaContext.merge_call, Strategy.extended.update and Strategy.extended.update_non_slot.
      • Experimental support for shape invariants has been enabled in tf.function. See the API docs for tf.autograph.experimental.set_loop_options for additional info.
      • AutoGraph error messages now exclude frames corresponding to APIs internal to AutoGraph.
      • Improve shape inference for tf.function input arguments to unlock more Grappler optimizations in TensorFlow 2.x.
      • Improve automatic control dependency management of resources by allowing resource reads to occur in parallel and synchronizing only on writes.
      • Fix execution order of multiple stateful calls to experimental_run_v2 in tf.function.
      • You can now iterate over RaggedTensors using a for loop inside tf.function.
    • tf.lite:
      • Migrated the tf.lite C inference API out of experimental into lite/c.
      • Add an option to disallow NNAPI CPU / partial acceleration on Android 10.
      • TFLite Android AARs now include the C headers and APIs required to use TFLite from native code.
      • Refactors the delegate and delegate kernel sources to allow usage in the linter.
      • Limit delegated ops to actually supported ones if a device name is specified or NNAPI CPU Fallback is disabled.
      • TFLite now supports the tf.math.reciprocal op by lowering it to the tf.div op.
      • TFLite's unpack op now supports boolean tensor inputs.
      • Microcontroller and embedded code moved from experimental to main TensorFlow Lite folder
      • Check for large TFLite tensors.
      • Fix GPU delegate crash with C++17.
      • Add 5D support to TFLite strided_slice.
      • Fix error in delegation of DEPTH_TO_SPACE to NNAPI causing op not to be accelerated.
      • Fix segmentation fault when running a model with LSTM nodes using NNAPI Delegate
      • Fix NNAPI delegate failure when an operand for Maximum/Minimum operation is a scalar.
      • Fix NNAPI delegate failure when Axis input for reduce operation is a scalar.
      • Expose option to limit the number of partitions that will be delegated to NNAPI.
      • If a target accelerator is specified, use its feature level to determine operations to delegate instead of SDK version.
    • tf.random:
      • Various random number generation improvements:
      • Add a fast path for default random_uniform
      • random_seed documentation improvement.
      • RandomBinomial broadcasts and appends the sample shape to the left rather than the right.
      • Added tf.random.stateless_binomial, tf.random.stateless_gamma, tf.random.stateless_poisson
      • tf.random.stateless_uniform now supports unbounded sampling of int types.
    • Math and Linear Algebra:
      • Add tf.linalg.LinearOperatorTridiag.
      • Add LinearOperatorBlockLowerTriangular
      • Add broadcasting support to tf.linalg.triangular_solve (#26204) and tf.math.invert_permutation.
      • Add tf.math.sobol_sample op.
      • Add tf.math.xlog1py.
      • Add tf.math.special.{dawsn,expi,fresnel_cos,fresnel_sin,spence}.
      • Add a Modified Discrete Cosine Transform (MDCT) and its inverse to tf.signal.
    • TPU Enhancements:
      • Refactor TpuClusterResolver to move shared logic to a separate pip package.
      • Support configuring the TPU software version from the Cloud TPU client.
      • Allowed TPU embedding weight decay factor to be multiplied by learning rate.
    • 👍 XLA Support:
      • Add standalone XLA AOT runtime target + relevant .cc sources to pip package.
      • Add check for memory alignment to MemoryAllocation::MemoryAllocation() on 32-bit ARM. This ensures a deterministic early exit instead of a hard to debug bus error later.
      • saved_model_cli aot_compile_cpu allows you to compile saved models to XLA header+object files and include them in your C++ programs.
      • Enable Igamma, Igammac for XLA.
      • XLA reduction emitter is deterministic when the environment variable TF_DETERMINISTIC_OPS is set.
    • Tracing and Debugging:
      • Add source, destination name to _send traceme to allow easier debugging.
      • Add traceme event to fastpathexecute.
    • Other:
      • Fix an issue with AUC.reset_states for multi-label AUC #35852
      • Fix the TF upgrade script to not delete files when there is a parsing error and the output mode is in-place.
      • Move tensorflow/core:framework/*_pyclif rules to tensorflow/core/framework:*_pyclif.

    Thanks to our Contributors

    🚀 This release contains contributions from many people at Google, as well as:

    👕 372046933, 8bitmp3, aaronhma, Abin Shahab, Aditya Patwardhan, Agoniii, Ahti Kitsik, Alan Yee, Albin Joy, Alex Hoffman, Alexander Grund, Alexandre E. Eichenberger, Amit Kumar Jaiswal, amoitra, Andrew Anderson, Angus-Luo, Anthony Barbier, Anton Kachatkou, Anuj Rawat, archis, Arpan-Dhatt, Arvind Sundararajan, Ashutosh Hathidara, autoih, Bairen Yi, Balint Cristian, Bas Aarts, BashirSbaiti, Basit Ayantunde, Ben Barsdell, Benjamin Gaillard, boron, Brett Koonce, Bryan Cutler, Christian Goll, Christian Sachs, Clayne Robison, comet, Daniel Falbel, Daria Zhuravleva, darsh8200, David Truby, Dayananda-V, deepakm, Denis Khalikov, Devansh Singh, Dheeraj R Reddy, Diederik Van Liere, Diego Caballero, Dominic Jack, dothinking, Douman, Drake Gens, Duncan Riach, Ehsan Toosi, ekuznetsov139, Elena Zhelezina, elzino, Ending2015a, Eric Schweitz, Erik Zettel, Ethan Saadia, Eugene Kuznetsov, Evgeniy Zheltonozhskiy, Ewout Ter Hoeven, exfalso, FAIJUL, Fangjun Kuang, Fei Hu, Frank Laub, Frederic Bastien, Fredrik Knutsson, frreiss, Frédéric Rechtenstein, fsx950223, Gaurav Singh, gbaned, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, Hans Gaiser, Hans Pabst, Haoyu Wu, Harry Slatyer, hsahovic, Hugo, Hugo Sjöberg, IrinaM21, jacco, Jake Tae, Jean-Denis Lesage, Jean-Michel Gorius, Jeff Daily, Jens Elofsson, Jerry Shih, jerryyin, Jin Mingjian, Jinjing Zhou, JKIsaacLee, jojimonv, Jonathan Dekhtiar, Jose Ignacio Gomez, Joseph-Rance, Judd, Julian Gross, Kaixi Hou, Kaustubh Maske Patil, Keunwoo Choi, Kevin Hanselman, Khor Chean Wei, Kilaru Yasaswi Sri Chandra Gandhi, Koan-Sin Tan, Koki Ibukuro, Kristian Holsheimer, kurileo, Lakshay Tokas, Lee Netherton, leike666666, Leslie-Fang-Intel, Li, Guizi, LIUJIAN435, Lukas Geiger, Lyo Nguyen, madisetti, Maher Jendoubi, Mahmoud Abuzaina, Manuel Freiberger, Marcel Koester, Marco Jacopo Ferrarotti, Markus Franke, marload, Mbah-Javis, mbhuiyan, Meng Zhang, Michael Liao, MichaelKonobeev, Michal Tarnowski, Milan Straka, minoring, Mohamed Nour Abouelseoud, MoussaMM, Mrinal Jain, mrTsjolder, Måns Nilsson, Namrata Bhave, Nicholas Gao, Niels Ole Salscheider, nikochiko, Niranjan Hasabnis, Nishidha Panpaliya, nmostafa, Noah Trenaman, nuka137, Officium, Owen L - Sfe, Pallavi G, Paul Andrey, Peng Sun, Peng Wu, Phil Pearl, PhilipMay, pingsutw, Pooya Davoodi, PragmaTwice, pshiko, Qwerty71, R Gomathi, Rahul Huilgol, Richard Xiao, Rick Wierenga, Roberto Rosmaninho, ruchit2801, Rushabh Vasani, Sami, Sana Damani, Sarvesh Dubey, Sasan Jafarnejad, Sergii Khomenko, Shane Smiskol, Shaochen Shi, sharkdtu, Shawn Presser, ShengYang1, Shreyash Patodia, Shyam Sundar Dhanabalan, Siju Samuel, Somyajit Chakraborty Sam, Srihari Humbarwadi, srinivasan.narayanamoorthy, Srishti Yadav, Steph-En-M, Stephan Uphoff, Stephen Mugisha, SumanSudhir, Taehun Kim, Tamas Bela Feher, TengLu, Tetragramm, Thierry Herrmann, Tian Jin, tigertang, Tom Carchrae, Tom Forbes, Trent Lo, Victor Peng, vijayphoenix, Vincent Abriou, Vishal Bhola, Vishnuvardhan Janapati, vladbataev, VoVAllen, Wallyss Lima, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, William Zhang, Xiaoming (Jason) Cui, Xiaoquan Kong, Xinan Jiang, Yasir Modak, Yasuhiro Matsumoto, Yaxun (Sam) Liu, Yong Tang, Ytyt-Yt, yuan, Yuan Mingshuai, Yuan Tang, Yuki Ueda, Yusup, zhangshijin, zhuwenxi

  • v2.2.0-rc1 Changes

    March 19, 2020

    🚀 Release 2.2.0

    Major Features and Improvements

    • Replaced the scalar type for string tensors from std::string to tensorflow::tstring which is now ABI stable.
    • A new Profiler for TF 2 for CPU/GPU/TPU. It offers both device and host performance analysis, including input pipeline and TF Ops. Optimization advisory is provided whenever possible. Please see this tutorial and guide for usage guidelines.
    • 🗄 Export C++ functions to Python using pybind11 instead of SWIG, as part of our effort to deprecate SWIG.
    • tf.distribute:
      • Support added for global sync BatchNormalization by using the newly added tf.keras.layers.experimental.SyncBatchNormalization layer. This layer will sync BatchNormalization statistics every step across all replicas taking part in sync training.
      • Performance improvements for GPU multi-worker distributed training using tf.distribute.experimental.MultiWorkerMirroredStrategy
      • Update NVIDIA NCCL to 2.5.7-1 for better performance and performance tuning. Please see nccl developer guide for more information on this.
      • Support gradient allreduce in float16. See this example usage.
      • Experimental support of all reduce gradient packing to allow overlapping gradient aggregation with backward path computation.
    • tf.keras:
      • Model.fit major improvements:
      • You can now use custom training logic with Model.fit by overriding Model.train_step (see the sketch after this list).
      • Easily write state-of-the-art training loops without worrying about all of the features Model.fit handles for you (distribution strategies, callbacks, data formats, looping logic, etc.).
      • See the default Model.train_step for an example of what this function should look like.
      • The same applies for validation and inference via Model.test_step and Model.predict_step.
      • The SavedModel format now supports all Keras built-in layers (including metrics, preprocessing layers, and stateful RNN layers)
    • tf.lite:
      • Enable TFLite experimental new converter by default.
    • XLA
      • XLA now builds and works on Windows. All prebuilt packages come with XLA available.
      • XLA can be enabled for a tf.function with “compile or throw exception” semantics on CPU and GPU.
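
    As a minimal sketch of the Model.train_step override pattern referenced in the tf.keras items above (the toy model, data, and hyperparameters are purely illustrative):

        import tensorflow as tf

        class CustomModel(tf.keras.Model):
            def train_step(self, data):
                x, y = data
                with tf.GradientTape() as tape:
                    y_pred = self(x, training=True)
                    # compiled_loss / compiled_metrics are populated by Model.compile().
                    loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
                grads = tape.gradient(loss, self.trainable_variables)
                self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
                self.compiled_metrics.update_state(y, y_pred)
                return {m.name: m.result() for m in self.metrics}

        # Hypothetical toy model and data, shown only to exercise the override.
        inputs = tf.keras.Input(shape=(4,))
        outputs = tf.keras.layers.Dense(1)(inputs)
        model = CustomModel(inputs, outputs)
        model.compile(optimizer="sgd", loss="mse", metrics=["mae"])
        model.fit(tf.random.normal((32, 4)), tf.random.normal((32, 1)), epochs=1)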

    💥 Breaking Changes

    • tf.keras:
      • In tf.keras.applications the name of the "top" layer has been standardized to "predictions". This is only a problem if your code relies on the exact name of the layer (see the sketch after this list).
      • Huber loss function has been updated to be consistent with other Keras losses. It now computes mean over the last axis of per-sample losses before applying the reduction function.
    • AutoGraph no longer converts functions passed to tf.py_function, tf.py_func and tf.numpy_function.
    • Deprecating XLA_CPU and XLA_GPU devices with this release.
    • Increasing the minimum bazel version to build TF to 2.0.0 to use Bazel's cc_experimental_shared_library.
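
    To illustrate the tf.keras.applications renaming above, the sketch below looks up the standardized "predictions" layer; the choice of ResNet50 is arbitrary and the weights are left uninitialized.

        import tensorflow as tf

        # With the standardized naming, the classification head can be looked up by name.
        model = tf.keras.applications.ResNet50(weights=None, include_top=True)
        top = model.get_layer("predictions")
        print(top.name, top.output_shape)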

    Known Caveats

    • macOS binaries are not available on PyPI in the tensorflow-cpu project, but they are identical to the binaries in the tensorflow project, since macOS has no GPU support.
    • 🚀 Due to certain unforeseen circumstances, we are unable to release macOS py3.8 binaries, but Windows/Linux binaries for py3.8 are available.
    • The current TensorFlow release now requires gast version 0.3.3.

    🐛 Bug Fixes and Other Changes

    • tf.data:
      • Removed autotune_algorithm from experimental optimization options.
    • TF Core:
      • tf.constant always creates CPU tensors irrespective of the current device context.
      • Eager TensorHandles maintain a list of mirrors for any copies to local or remote devices. This avoids any redundant copies due to op execution.
      • For tf.Tensor & tf.Variable, .experimental_ref() is no longer experimental and is available as simply .ref() (see the sketch after this list).
      • pfor/vectorized_map: Added support for vectorizing 56 more ops. Vectorizing tf.cond is also supported now.
      • Set as much partial shape as we can infer statically within the gradient impl of the gather op.
      • Gradient of tf.while_loop emits StatelessWhile op if cond and body functions are stateless. This allows multiple gradients while ops to run in parallel under distribution strategy.
      • Speed up GradientTape in eager mode by auto-generating list of op inputs/outputs which are unused and hence not cached for gradient functions.
      • Support back_prop=False in while_v2 but mark it as deprecated.
      • Improve error message when attempting to use None in data-dependent control flow.
      • Add RaggedTensor.numpy().
      • Update RaggedTensor.__getitem__ to preserve uniform dimensions & allow indexing into uniform dimensions.
      • Update tf.expand_dims to always insert the new dimension as a non-ragged dimension.
      • Update tf.embedding_lookup to use partition_strategy and max_norm when ids is ragged.
      • Allow batch_dims==rank(indices) in tf.gather.
      • Add support for bfloat16 in tf.print.
    • tf.distribute:
      • Support embedding_column with variable-length input features for MultiWorkerMirroredStrategy.
    • tf.keras:
      • Added all_reduce_sum_gradients argument to tf.keras.optimizers.Optimizer.apply_gradients. This allows custom gradient aggregation and processing of aggregated gradients in a custom training loop.
      • Allow pathlib.Path paths for loading models via Keras API.
    • tf.function/AutoGraph:
      • AutoGraph is now available in ReplicaContext.merge_call, Strategy.extended.update and Strategy.extended.update_non_slot.
      • Experimental support for shape invariants has been enabled in tf.function. See the API docs for tf.autograph.experimental.set_loop_options for additional info, and the sketch after this list.
      • AutoGraph error messages now exclude frames corresponding to APIs internal to AutoGraph.
      • Improve shape inference for tf.function input arguments to unlock more Grappler optimizations in TensorFlow 2.x.
      • Improve automatic control dependency management of resources by allowing resource reads to occur in parallel and synchronizing only on writes.
      • Fix execution order of multiple stateful calls to experimental_run_v2 in tf.function.
      • You can now iterate over RaggedTensors using a for loop inside tf.function.
    • tf.lite:
      • Migrated the tf.lite C inference API out of experimental into lite/c.
      • Add an option to disallow NNAPI CPU / partial acceleration on Android 10.
      • TFLite Android AARs now include the C headers and APIs required to use TFLite from native code.
      • Refactors the delegate and delegate kernel sources to allow usage in the linter.
      • Limit delegated ops to actually supported ones if a device name is specified or NNAPI CPU Fallback is disabled.
      • TFLite now supports the tf.math.reciprocal op by lowering it to the tf.div op.
      • TFLite's unpack op now supports boolean tensor inputs.
      • Microcontroller and embedded code moved from experimental to main TFLite folder
      • Check for large TFLite tensors.
      • Fix GPU delegate crash with C++17.
      • Add 5D support to TFLite strided_slice.
      • Fix error in delegation of DEPTH_TO_SPACE to NNAPI causing op not to be accelerated.
      • Fix segmentation fault when running a model with LSTM nodes using NNAPI Delegate
      • Fix NNAPI delegate failure when an operand for Maximum/Minimum operation is a scalar.
      • Fix NNAPI delegate failure when Axis input for reduce operation is a scalar.
      • Expose option to limit the number of partitions that will be delegated to NNAPI.
      • If a target accelerator is specified, use its feature level to determine operations to delegate instead of SDK version.
    • tf.random:
      • Various random number generation improvements:
      • Add a fast path for default random_uniform
      • random_seed documentation improvement.
      • RandomBinomial broadcasts and appends the sample shape to the left rather than the right.
      • Added tf.random.stateless_binomial, tf.random.stateless_gamma, and tf.random.stateless_poisson (see the sketch after this list).
      • tf.random.stateless_uniform now supports unbounded sampling of int types.
    • Math and Linear Algebra:
      • Add tf.linalg.LinearOperatorTridiag.
      • Add LinearOperatorBlockLowerTriangular
      • Add broadcasting support to tf.linalg.triangular_solve (#26204) and tf.math.invert_permutation.
      • Add tf.math.sobol_sample op.
      • Add tf.math.xlog1py.
      • Add tf.math.special.{dawsn,expi,fresnel_cos,fresnel_sin,spence}.
      • Add a Modified Discrete Cosine Transform (MDCT) and its inverse to tf.signal.
    • TPU Enhancements:
      • Refactor TpuClusterResolver to move shared logic to a separate pip package.
      • Support configuring TPU software version from cloud tpu client.
      • Allowed TPU embedding weight decay factor to be multiplied by learning rate.
    • 👍 XLA Support:
      • Add standalone XLA AOT runtime target + relevant .cc sources to pip package.
      • Add check for memory alignment to MemoryAllocation::MemoryAllocation() on 32-bit ARM. This ensures a deterministic early exit instead of a hard to debug bus error later.
      • saved_model_cli aot_compile_cpu allows you to compile saved models to XLA header+object files and include them in your C++ programs.
      • Enable Igamma, Igammac for XLA.
      • XLA reduction emitter is deterministic when the environment variable TF_DETERMINISTIC_OPS is set.
    • Tracing and Debugging:
      • Add source, destination name to _send traceme to allow easier debugging.
      • Add traceme event to fastpathexecute.
    • Other:
      • Fix an issue with AUC.reset_states for multi-label AUC (#35852).
      • Fix the TF upgrade script to not delete files when there is a parsing error and the output mode is in-place.
      • Move tensorflow/core:framework/*_pyclif rules to tensorflow/core/framework:*_pyclif.
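
    A minimal sketch of the .ref() change noted under TF Core above; the variables are illustrative.

        import tensorflow as tf

        v = tf.Variable(1.0)
        w = tf.Variable(2.0)

        # Tensors and Variables are not hashable in TF 2.x; .ref() returns a hashable
        # wrapper usable as a dict key or set member, and .deref() recovers the original.
        table = {v.ref(): "first", w.ref(): "second"}
        assert table[v.ref()] == "first"
        assert v.ref().deref() is v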
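
    The shape-invariant support in tf.function mentioned above can be exercised roughly as follows; the growing-tensor loop is a sketch based on the tf.autograph.experimental.set_loop_options documentation.

        import tensorflow as tf

        @tf.function
        def grow(n):
            x = tf.constant([0.0])
            for _ in tf.range(n):
                # Declare that x may change its first dimension across iterations.
                tf.autograph.experimental.set_loop_options(
                    shape_invariants=[(x, tf.TensorShape([None]))])
                x = tf.concat([x, [1.0]], axis=0)
            return x

        print(grow(tf.constant(3)))  # [0., 1., 1., 1.]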
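
    And a short sketch of the new stateless sampling ops listed under tf.random above; the shapes, seed, and distribution parameters are arbitrary.

        import tensorflow as tf

        seed = [1, 2]  # stateless RNG ops take an explicit [int, int] seed

        binom = tf.random.stateless_binomial(shape=[3], seed=seed, counts=10., probs=0.5)
        gamma = tf.random.stateless_gamma(shape=[3], seed=seed, alpha=2.0)
        poisson = tf.random.stateless_poisson(shape=[3], seed=seed, lam=4.0)
        print(binom, gamma, poisson)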

    Thanks to our Contributors

    🚀 This release contains contributions from many people at Google, as well as:

    👕 372046933, 8bitmp3, aaronhma, Abin Shahab, Aditya Patwardhan, Agoniii, Ahti Kitsik, Alan Yee, Albin Joy, Alex Hoffman, Alexander Grund, Alexandre E. Eichenberger, Amit Kumar Jaiswal, amoitra, Andrew Anderson, Angus-Luo, Anthony Barbier, Anton Kachatkou, Anuj Rawat, archis, Arpan-Dhatt, Arvind Sundararajan, Ashutosh Hathidara, autoih, Bairen Yi, Balint Cristian, Bas Aarts, BashirSbaiti, Basit Ayantunde, Ben Barsdell, Benjamin Gaillard, boron, Brett Koonce, Bryan Cutler, Christian Goll, Christian Sachs, Clayne Robison, comet, Daniel Falbel, Daria Zhuravleva, darsh8200, David Truby, Dayananda-V, deepakm, Denis Khalikov, Devansh Singh, Dheeraj R Reddy, Diederik Van Liere, Diego Caballero, Dominic Jack, dothinking, Douman, Drake Gens, Duncan Riach, Ehsan Toosi, ekuznetsov139, Elena Zhelezina, elzino, Ending2015a, Eric Schweitz, Erik Zettel, Ethan Saadia, Eugene Kuznetsov, Evgeniy Zheltonozhskiy, Ewout Ter Hoeven, exfalso, FAIJUL, Fangjun Kuang, Fei Hu, Frank Laub, Frederic Bastien, Fredrik Knutsson, frreiss, Frédéric Rechtenstein, fsx950223, Gaurav Singh, gbaned, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, Hans Gaiser, Hans Pabst, Haoyu Wu, Harry Slatyer, hsahovic, Hugo, Hugo Sjöberg, IrinaM21, jacco, Jake Tae, Jean-Denis Lesage, Jean-Michel Gorius, Jeff Daily, Jens Elofsson, Jerry Shih, jerryyin, Jin Mingjian, Jinjing Zhou, JKIsaacLee, jojimonv, Jonathan Dekhtiar, Jose Ignacio Gomez, Joseph-Rance, Judd, Julian Gross, Kaixi Hou, Kaustubh Maske Patil, Keunwoo Choi, Kevin Hanselman, Khor Chean Wei, Kilaru Yasaswi Sri Chandra Gandhi, Koan-Sin Tan, Koki Ibukuro, Kristian Holsheimer, kurileo, Lakshay Tokas, Lee Netherton, leike666666, Leslie-Fang-Intel, Li, Guizi, LIUJIAN435, Lukas Geiger, Lyo Nguyen, madisetti, Maher Jendoubi, Mahmoud Abuzaina, Manuel Freiberger, Marcel Koester, Marco Jacopo Ferrarotti, Markus Franke, marload, Mbah-Javis, mbhuiyan, Meng Zhang, Michael Liao, MichaelKonobeev, Michal Tarnowski, Milan Straka, minoring, Mohamed Nour Abouelseoud, MoussaMM, Mrinal Jain, mrTsjolder, Måns Nilsson, Namrata Bhave, Nicholas Gao, Niels Ole Salscheider, nikochiko, Niranjan Hasabnis, Nishidha Panpaliya, nmostafa, Noah Trenaman, nuka137, Officium, Owen L - Sfe, Pallavi G, Paul Andrey, Peng Sun, Peng Wu, Phil Pearl, PhilipMay, pingsutw, Pooya Davoodi, PragmaTwice, pshiko, Qwerty71, R Gomathi, Rahul Huilgol, Richard Xiao, Rick Wierenga, Roberto Rosmaninho, ruchit2801, Rushabh Vasani, Sami, Sana Damani, Sarvesh Dubey, Sasan Jafarnejad, Sergii Khomenko, Shane Smiskol, Shaochen Shi, sharkdtu, Shawn Presser, ShengYang1, Shreyash Patodia, Shyam Sundar Dhanabalan, Siju Samuel, Somyajit Chakraborty Sam, Srihari Humbarwadi, srinivasan.narayanamoorthy, Srishti Yadav, Steph-En-M, Stephan Uphoff, Stephen Mugisha, SumanSudhir, Taehun Kim, Tamas Bela Feher, TengLu, Tetragramm, Thierry Herrmann, Tian Jin, tigertang, Tom Carchrae, Tom Forbes, Trent Lo, Victor Peng, vijayphoenix, Vincent Abriou, Vishal Bhola, Vishnuvardhan Janapati, vladbataev, VoVAllen, Wallyss Lima, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, William Zhang, Xiaoming (Jason) Cui, Xiaoquan Kong, Xinan Jiang, Yasir Modak, Yasuhiro Matsumoto, Yaxun (Sam) Liu, Yong Tang, Ytyt-Yt, yuan, Yuan Mingshuai, Yuan Tang, Yuki Ueda, Yusup, zhangshijin, zhuwenxi

  • v2.2.0-rc0 Changes

    March 10, 2020

    🚀 Release 2.2.0

    Major Features and Improvements

    • Replaced the scalar type for string tensors from std::string to tensorflow::tstring which is now ABI stable.
    • A new Profiler for TF 2 for CPU/GPU/TPU. It offers both device and host performance analysis, including input pipeline and TF Ops. Optimization advisory is provided whenever possible. Please see this tutorial for usage guidelines.
    • 🗄 Export C++ functions to Python using pybind11 instead of SWIG, as part of our effort to deprecate SWIG.
    • tf.distribute:
      • Support added for global sync BatchNormalization by using the newly added tf.keras.layers.experimental.SyncBatchNormalization layer. This layer will sync BatchNormalization statistics every step across all replicas taking part in sync training.
      • Performance improvements for GPU multi-worker distributed training using tf.distribute.experimental.MultiWorkerMirroredStrategy
      • Update NVIDIA NCCL to 2.5.7-1 for better performance and performance tuning. Please see nccl developer guide for more information on this.
      • Support gradient allreduce in float16. See this example usage.
      • Experimental support of all reduce gradient packing to allow overlapping gradient aggregation with backward path computation.
    • tf.keras:
      • Model.fit major improvements:
      • You can now use custom training logic with Model.fit by overriding Model.train_step.
      • Easily write state-of-the-art training loops without worrying about all of the features Model.fit handles for you (distribution strategies, callbacks, data formats, looping logic, etc)
      • See the default Model.train_step for an example of what this function should look like
      • Same applies for validation and inference via Model.test_step and Model.predict_step
      • The SavedModel format now supports all Keras built-in layers (including metrics, preprocessing layers, and stateful RNN layers)
    • tf.lite:
      • Enable TFLite experimental new converter by default.
    • XLA
      • XLA now builds and works on Windows. All prebuilt packages come with XLA available.
      • XLA can be enabled for a tf.function with “compile or throw exception” semantics on CPU and GPU.

    💥 Breaking Changes

    • tf.keras:
      • In tf.keras.applications the name of the "top" layer has been standardized to "predictions". This is only a problem if your code relies on the exact name of the layer.
      • Huber loss function has been updated to be consistent with other Keras losses. It now computes mean over the last axis of per-sample losses before applying the reduction function.
    • AutoGraph no longer converts functions passed to tf.py_function, tf.py_func and tf.numpy_function.
    • Deprecating XLA_CPU and XLA_GPU devices with this release.
    • Increasing the minimum bazel version to build TF to 1.2.1 to use Bazel's cc_experimental_shared_library.

    Known Caveats

    • macOS binaries are not available on PyPI in the tensorflow-cpu project, but they are identical to the binaries in the tensorflow project, since macOS has no GPU support.

    🐛 Bug Fixes and Other Changes

    • tf.data:
      • Removed autotune_algorithm from experimental optimization options.
    • TF Core:
      • tf.constant always creates CPU tensors irrespective of the current device context.
      • Eager TensorHandles maintain a list of mirrors for any copies to local or remote devices. This avoids any redundant copies due to op execution.
      • For tf.Tensor & tf.Variable, .experimental_ref() is no longer experimental and is available as simply .ref().
      • Support matrix inverse and solves in pfor/vectorized_map.
      • Set as much partial shape as we can infer statically within the gradient impl of the gather op.
      • Gradient of tf.while_loop emits StatelessWhile op if cond and body functions are stateless. This allows multiple gradients while ops to run in parallel under distribution strategy.
      • Speed up GradientTape in eager mode by auto-generating list of op inputs/outputs which are unused and hence not cached for gradient functions.
      • Support back_prop=False in while_v2 but mark it as deprecated.
      • Improve error message when attempting to use None in data-dependent control flow.
      • Add RaggedTensor.numpy().
      • Update RaggedTensor.__getitem__ to preserve uniform dimensions & allow indexing into uniform dimensions.
      • Update tf.expand_dims to always insert the new dimension as a non-ragged dimension.
      • Update tf.embedding_lookup to use partition_strategy and max_norm when ids is ragged.
      • Allow batch_dims==rank(indices) in tf.gather.
      • Add support for bfloat16 in tf.print.
    • tf.distribute:
      • Support embedding_column with variable-length input features for MultiWorkerMirroredStrategy.
    • tf.keras:
      • Added all_reduce_sum_gradients argument to tf.keras.optimizers.Optimizer.apply_gradients. This allows custom gradient aggregation and processing of aggregated gradients in a custom training loop.
      • Allow pathlib.Path paths for loading models via Keras API.
    • tf.function/AutoGraph:
      • AutoGraph is now available in ReplicaContext.merge_call, Strategy.extended.update and Strategy.extended.update_non_slot.
      • Experimental support for shape invariants has been enabled in tf.function. See the API docs for tf.autograph.experimental.set_loop_options for additional info.
      • AutoGraph error messages now exclude frames corresponding to APIs internal to AutoGraph.
      • Improve shape inference for tf.function input arguments to unlock more Grappler optimizations in TensorFlow 2.x.
      • Improve automatic control dependency management of resources by allowing resource reads to occur in parallel and synchronizing only on writes.
      • Fix execution order of multiple stateful calls to experimental_run_v2 in tf.function.
      • You can now iterate over RaggedTensors using a for loop inside tf.function.
    • tf.lite:
      • Migrated the tf.lite C inference API out of experimental into lite/c.
      • Add an option to disallow NNAPI CPU / partial acceleration on Android 10.
      • TFLite Android AARs now include the C headers and APIs required to use TFLite from native code.
      • Refactors the delegate and delegate kernel sources to allow usage in the linter.
      • Limit delegated ops to actually supported ones if a device name is specified or NNAPI CPU Fallback is disabled.
      • TFLite now supports the tf.math.reciprocal op by lowering it to the tf.div op.
      • TFLite's unpack op now supports boolean tensor inputs.
      • Microcontroller and embedded code moved from experimental to main TensorFlow Lite folder
      • Check for large TFLite tensors.
      • Fix GPU delegate crash with C++17.
      • Add 5D support to TFLite strided_slice.
      • Fix error in delegation of DEPTH_TO_SPACE to NNAPI causing op not to be accelerated.
      • Fix segmentation fault when running a model with LSTM nodes using NNAPI Delegate
      • Fix NNAPI delegate failure when an operand for Maximum/Minimum operation is a scalar.
      • Fix NNAPI delegate failure when Axis input for reduce operation is a scalar.
      • Expose option to limit the number of partitions that will be delegated to NNAPI.
      • If a target accelerator is specified, use its feature level to determine operations to delegate instead of SDK version.
    • tf.random:
      • Various random number generation improvements:
      • Add a fast path for default random_uniform
      • random_seed documentation improvement.
      • RandomBinomial broadcasts and appends the sample shape to the left rather than the right.
      • Added tf.random.stateless_binomial, tf.random.stateless_gamma, tf.random.stateless_poisson
      • tf.random.stateless_uniform now supports unbounded sampling of int types.
    • Math and Linear Algebra:
      • Add tf.linalg.LinearOperatorTridiag.
      • Add LinearOperatorBlockLowerTriangular
      • Add broadcasting support to tf.linalg.triangular_solve (#26204) and tf.math.invert_permutation.
      • Add tf.math.sobol_sample op.
      • Add tf.math.xlog1py.
      • Add tf.math.special.{dawsn,expi,fresnel_cos,fresnel_sin,spence}.
      • Add a Modified Discrete Cosine Transform (MDCT) and its inverse to tf.signal.
    • TPU Enhancements:
      • Refactor TpuClusterResolver to move shared logic to a separate pip package.
      • Support configuring TPU software version from cloud tpu client.
      • Allowed TPU embedding weight decay factor to be multiplied by learning rate.
    • 👍 XLA Support:
      • Add standalone XLA AOT runtime target + relevant .cc sources to pip package.
      • Add check for memory alignment to MemoryAllocation::MemoryAllocation() on 32-bit ARM. This ensures a deterministic early exit instead of a hard to debug bus error later.
      • saved_model_cli aot_compile_cpu allows you to compile saved models to XLA header+object files and include them in your C++ programs.
      • Enable Igamma, Igammac for XLA.
      • XLA reduction emitter is deterministic when the environment variable TF_DETERMINISTIC_OPS is set.
    • Tracing and Debugging:
      • Add source, destination name to _send traceme to allow easier debugging.
      • Add traceme event to fastpathexecute.
    • Other:
      • Fix an issue with AUC.reset_states for multi-label AUC (#35852).
      • Fix the TF upgrade script to not delete files when there is a parsing error and the output mode is in-place.
      • Move tensorflow/core:framework/*_pyclif rules to tensorflow/core/framework:*_pyclif.

    Thanks to our Contributors

    🚀 This release contains contributions from many people at Google, as well as:

    👕 372046933, 8bitmp3, aaronhma, Abin Shahab, Aditya Patwardhan, Agoniii, Ahti Kitsik, Alan Yee, Albin Joy, Alex Hoffman, Alexander Grund, Alexandre E. Eichenberger, Amit Kumar Jaiswal, amoitra, Andrew Anderson, Angus-Luo, Anthony Barbier, Anton Kachatkou, Anuj Rawat, archis, Arpan-Dhatt, Arvind Sundararajan, Ashutosh Hathidara, autoih, Bairen Yi, Balint Cristian, Bas Aarts, BashirSbaiti, Basit Ayantunde, Ben Barsdell, Benjamin Gaillard, boron, Brett Koonce, Bryan Cutler, Christian Goll, Christian Sachs, Clayne Robison, comet, Daniel Falbel, Daria Zhuravleva, darsh8200, David Truby, Dayananda-V, deepakm, Denis Khalikov, Devansh Singh, Dheeraj R Reddy, Diederik Van Liere, Diego Caballero, Dominic Jack, dothinking, Douman, Drake Gens, Duncan Riach, Ehsan Toosi, ekuznetsov139, Elena Zhelezina, elzino, Ending2015a, Eric Schweitz, Erik Zettel, Ethan Saadia, Eugene Kuznetsov, Evgeniy Zheltonozhskiy, Ewout Ter Hoeven, exfalso, FAIJUL, Fangjun Kuang, Fei Hu, Frank Laub, Frederic Bastien, Fredrik Knutsson, frreiss, Frédéric Rechtenstein, fsx950223, Gaurav Singh, gbaned, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, Hans Gaiser, Hans Pabst, Haoyu Wu, Harry Slatyer, hsahovic, Hugo, Hugo Sjöberg, IrinaM21, jacco, Jake Tae, Jean-Denis Lesage, Jean-Michel Gorius, Jeff Daily, Jens Elofsson, Jerry Shih, jerryyin, Jin Mingjian, Jinjing Zhou, JKIsaacLee, jojimonv, Jonathan Dekhtiar, Jose Ignacio Gomez, Joseph-Rance, Judd, Julian Gross, Kaixi Hou, Kaustubh Maske Patil, Keunwoo Choi, Kevin Hanselman, Khor Chean Wei, Kilaru Yasaswi Sri Chandra Gandhi, Koan-Sin Tan, Koki Ibukuro, Kristian Holsheimer, kurileo, Lakshay Tokas, Lee Netherton, leike666666, Leslie-Fang-Intel, Li, Guizi, LIUJIAN435, Lukas Geiger, Lyo Nguyen, madisetti, Maher Jendoubi, Mahmoud Abuzaina, Manuel Freiberger, Marcel Koester, Marco Jacopo Ferrarotti, Markus Franke, marload, Mbah-Javis, mbhuiyan, Meng Zhang, Michael Liao, MichaelKonobeev, Michal Tarnowski, Milan Straka, minoring, Mohamed Nour Abouelseoud, MoussaMM, Mrinal Jain, mrTsjolder, Måns Nilsson, Namrata Bhave, Nicholas Gao, Niels Ole Salscheider, nikochiko, Niranjan Hasabnis, Nishidha Panpaliya, nmostafa, Noah Trenaman, nuka137, Officium, Owen L - Sfe, Pallavi G, Paul Andrey, Peng Sun, Peng Wu, Phil Pearl, PhilipMay, pingsutw, Pooya Davoodi, PragmaTwice, pshiko, Qwerty71, R Gomathi, Rahul Huilgol, Richard Xiao, Rick Wierenga, Roberto Rosmaninho, ruchit2801, Rushabh Vasani, Sami, Sana Damani, Sarvesh Dubey, Sasan Jafarnejad, Sergii Khomenko, Shane Smiskol, Shaochen Shi, sharkdtu, Shawn Presser, ShengYang1, Shreyash Patodia, Shyam Sundar Dhanabalan, Siju Samuel, Somyajit Chakraborty Sam, Srihari Humbarwadi, srinivasan.narayanamoorthy, Srishti Yadav, Steph-En-M, Stephan Uphoff, Stephen Mugisha, SumanSudhir, Taehun Kim, Tamas Bela Feher, TengLu, Tetragramm, Thierry Herrmann, Tian Jin, tigertang, Tom Carchrae, Tom Forbes, Trent Lo, Victor Peng, vijayphoenix, Vincent Abriou, Vishal Bhola, Vishnuvardhan Janapati, vladbataev, VoVAllen, Wallyss Lima, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, William Zhang, Xiaoming (Jason) Cui, Xiaoquan Kong, Xinan Jiang, Yasir Modak, Yasuhiro Matsumoto, Yaxun (Sam) Liu, Yong Tang, Ytyt-Yt, yuan, Yuan Mingshuai, Yuan Tang, Yuki Ueda, Yusup, zhangshijin, zhuwenxi

  • v2.1.2 Changes

    September 24, 2020

    🚀 Release 2.1.2

    🐛 Bug Fixes and Other Changes

  • v2.1.1 Changes

    May 18, 2020

    🚀 Release 2.1.1

    🐛 Bug Fixes and Other Changes

  • v2.1.0 Changes

    January 08, 2020

    🚀 Release 2.1.0

    🚀 TensorFlow 2.1 will be the last TF release supporting Python 2. Python 2 support officially ends on January 1, 2020. As announced earlier, TensorFlow will also stop supporting Python 2 starting January 1, 2020, and no more releases are expected in 2019.

    Major Features and Improvements

    • 🐧 The tensorflow pip package now includes GPU support by default (same as tensorflow-gpu) for both Linux and Windows. This runs on machines with and without NVIDIA GPUs. tensorflow-gpu is still available, and CPU-only packages can be downloaded at tensorflow-cpu for users who are concerned about package size.
    • 🏁 Windows users: Officially-released tensorflow Pip packages are now built with Visual Studio 2019 version 16.4 in order to take advantage of the new /d2ReducedOptimizeHugeFunctions compiler flag. To use these new packages, you must install "Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019", available from Microsoft's website here.
      • This does not change the minimum required version for building TensorFlow from source on Windows, but builds enabling EIGEN_STRONG_INLINE can take over 48 hours to compile without this flag. Refer to configure.py for more information about EIGEN_STRONG_INLINE and /d2ReducedOptimizeHugeFunctions.
      • If either of the required DLLs, msvcp140.dll (old) or msvcp140_1.dll (new), are missing on your machine, import tensorflow will print a warning message.
    • 📦 The tensorflow pip package is built with CUDA 10.1 and cuDNN 7.6.
    • tf.keras
      • Experimental support for mixed precision is available on GPUs and Cloud TPUs. See usage guide.
      • Introduced the TextVectorization layer, which takes as input raw strings and takes care of text standardization, tokenization, n-gram generation, and vocabulary indexing. See this end-to-end text classification example, and the sketch after this list.
      • Keras .compile .fit .evaluate and .predict are allowed to be outside of the DistributionStrategy scope, as long as the model was constructed inside of a scope.
      • Experimental support for Keras .compile, .fit, .evaluate, and .predict is available for Cloud TPUs, for all types of Keras models (sequential, functional, and subclassing models).
      • Automatic outside compilation is now enabled for Cloud TPUs. This allows tf.summary to be used more conveniently with Cloud TPUs.
      • Dynamic batch sizes with DistributionStrategy and Keras are supported on Cloud TPUs.
      • Support for .fit, .evaluate, .predict on TPU using numpy data, in addition to tf.data.Dataset.
      • Keras reference implementations for many popular models are available in the TensorFlow Model Garden.
    • tf.data
      • Changes rebatching for tf.data datasets + DistributionStrategy for better performance. Note that the dataset also behaves slightly differently, in that the rebatched dataset cardinality will always be a multiple of the number of replicas.
      • tf.data.Dataset now supports automatic data distribution and sharding in distributed environments, including on TPU pods.
      • Distribution policies for tf.data.Dataset can now be tuned with:
        1. tf.data.experimental.AutoShardPolicy (OFF, AUTO, FILE, DATA)
        2. tf.data.experimental.ExternalStatePolicy (WARN, IGNORE, FAIL)
    • tf.debugging
      • Add tf.debugging.enable_check_numerics() and tf.debugging.disable_check_numerics() to help debugging the root causes of issues involving infinities and NaNs.
    • tf.distribute
      • Custom training loop support on TPUs and TPU pods is available through strategy.experimental_distribute_dataset, strategy.experimental_distribute_datasets_from_function, strategy.experimental_run_v2, strategy.reduce.
      • Support for a global distribution strategy through tf.distribute.experimental_set_strategy(), in addition to strategy.scope().
    • TensorRT
      • TensorRT 6.0 is now supported and enabled by default. This adds support for more TensorFlow ops including Conv3D, Conv3DBackpropInputV2, AvgPool3D, MaxPool3D, ResizeBilinear, and ResizeNearestNeighbor. In addition, the TensorFlow-TensorRT python conversion API is exported as tf.experimental.tensorrt.Converter.
    • Environment variable TF_DETERMINISTIC_OPS has been added. When set to "true" or "1", this environment variable makes tf.nn.bias_add operate deterministically (i.e. reproducibly), but currently only when XLA JIT compilation is not enabled. Setting TF_DETERMINISTIC_OPS to "true" or "1" also makes cuDNN convolution and max-pooling operate deterministically. This makes Keras Conv*D and MaxPool*D layers operate deterministically in both the forward and backward directions when running on a CUDA-enabled GPU.
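
    A minimal sketch of the TextVectorization layer mentioned above, using the TF 2.1 experimental preprocessing path; the toy corpus and sizes are illustrative.

        import tensorflow as tf

        texts = tf.data.Dataset.from_tensor_slices(
            ["the cat sat on the mat", "the dog ran"]).batch(2)

        vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization(
            max_tokens=100, output_sequence_length=6)
        vectorizer.adapt(texts)  # builds the vocabulary from the corpus

        # Maps raw strings to padded sequences of integer token ids.
        print(vectorizer(tf.constant([["the dog sat on the mat"]])))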
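
    The TF_DETERMINISTIC_OPS switch described in the last item above is an ordinary environment variable; a minimal sketch:

        import os

        # Must be set before the relevant ops run (typically before importing TensorFlow).
        os.environ["TF_DETERMINISTIC_OPS"] = "1"

        import tensorflow as tf

        # With the flag set, cuDNN convolution/max-pooling and tf.nn.bias_add
        # (when XLA JIT compilation is not enabled) select deterministic kernels
        # on CUDA-enabled GPUs.
        print(os.environ["TF_DETERMINISTIC_OPS"])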

    💥 Breaking Changes

    • Deletes Operation.traceback_with_start_lines for which we know of no usages.
    • Removed id from tf.Tensor.__repr__() as id is not useful other than for internal debugging.
    • Some tf.assert_* methods now raise assertions at operation creation time if the input tensors' values are known at that time, not during the session.run(). This only changes behavior when the graph execution would have resulted in an error. When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in feed_dict argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
    • The following APIs are no longer experimental: tf.config.list_logical_devices, tf.config.list_physical_devices, tf.config.get_visible_devices, tf.config.set_visible_devices, tf.config.get_logical_device_configuration, tf.config.set_logical_device_configuration (see the sketch after this list).
    • tf.config.experimental.VirtualDeviceConfiguration has been renamed to tf.config.LogicalDeviceConfiguration.
    • tf.config.experimental_list_devices has been removed; please use tf.config.list_logical_devices.
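
    A short sketch using the now-stable device configuration APIs listed above; what it prints obviously depends on the machine's hardware.

        import tensorflow as tf

        gpus = tf.config.list_physical_devices("GPU")
        if gpus:
            # Restrict TensorFlow to the first GPU only.
            tf.config.set_visible_devices(gpus[0], "GPU")

        print(tf.config.list_logical_devices())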

    🐛 Bug Fixes and Other Changes

    • tf.data
      • Fixes concurrency issue with tf.data.experimental.parallel_interleave with sloppy=True.
      • Add tf.data.experimental.dense_to_ragged_batch() (see the sketch after this list).
      • Extend tf.data parsing ops to support RaggedTensors.
    • tf.distribute
      • Fix issue where GRU would crash or give incorrect output when a tf.distribute.Strategy was used.
    • tf.estimator
      • Added option in tf.estimator.CheckpointSaverHook to not save the GraphDef.
      • Moving the checkpoint reader from swig to pybind11.
    • tf.keras
      • Export depthwise_conv2d in tf.keras.backend.
      • In Keras Layers and Models, Variables in trainable_weights, non_trainable_weights, and weights are explicitly deduplicated.
      • Keras model.load_weights now accepts skip_mismatch as an argument. This was available in external Keras, and has now been copied over to tf.keras.
      • Fix the input shape caching behavior of Keras convolutional layers.
      • Model.fit_generator, Model.evaluate_generator, Model.predict_generator, Model.train_on_batch, Model.test_on_batch, and Model.predict_on_batch methods now respect the run_eagerly property, and will correctly run using tf.function by default. Note that Model.fit_generator, Model.evaluate_generator, and Model.predict_generator are deprecated endpoints. They are subsumed by Model.fit, Model.evaluate, and Model.predict which now support generators and Sequences.
    • tf.lite
      • Legalization for NMS ops in TFLite.
      • Add narrow_range and axis to quantize_v2 and dequantize ops.
      • Added support for FusedBatchNormV3 in converter.
      • Add an errno-like field to NNAPI delegate for detecting NNAPI errors for fallback behaviour.
      • Refactors NNAPI Delegate to support detailed reason why an operation is not accelerated.
      • Converts hardswish subgraphs into atomic ops.
    • Other
      • Critical stability updates for TPUs, especially in cases where the XLA compiler produces compilation errors.
      • TPUs can now be re-initialized multiple times, using tf.tpu.experimental.initialize_tpu_system.
      • Add RaggedTensor.merge_dims().
      • Added new uniform_row_length row-partitioning tensor to RaggedTensor.
      • Add shape arg to RaggedTensor.to_tensor; Improve speed of RaggedTensor.to_tensor.
      • tf.io.parse_sequence_example and tf.io.parse_single_sequence_example now support ragged features.
      • Fix while_v2 with variables in custom gradient.
      • Support taking gradients of V2 tf.cond and tf.while_loop using LookupTable.
      • Fix bug where vectorized_map failed on inputs with unknown static shape.
      • Add preliminary support for sparse CSR matrices.
      • Tensor equality with None now behaves as expected.
      • Make calls to tf.function(f)(), tf.function(f).get_concrete_function and tf.function(f).get_initialization_function thread-safe.
      • Extend tf.identity to work with CompositeTensors (such as SparseTensor)
      • Added more dtypes and zero-sized inputs to Einsum Op and improved its performance
      • Enable multi-worker NCCL all-reduce inside functions executing eagerly.
      • Added complex128 support to RFFT, RFFT2D, RFFT3D, IRFFT, IRFFT2D, and IRFFT3D.
      • Add pfor converter for SelfAdjointEigV2.
      • Add tf.math.ndtri and tf.math.erfinv.
      • Add tf.config.experimental.enable_mlir_bridge to allow using MLIR compiler bridge in eager model.
      • Added support for MatrixSolve on Cloud TPU / XLA.
      • Added tf.autodiff.ForwardAccumulator for forward-mode autodiff (see the sketch after this list).
      • Add LinearOperatorPermutation.
      • A few performance optimizations on tf.reduce_logsumexp.
      • Added multilabel handling to AUC metric
      • Optimization on zeros_like.
      • Dimension constructor now requires None or types with an __index__ method.
      • Add tf.random.uniform microbenchmark.
      • Use _protogen suffix for proto library targets instead of _cc_protogen suffix.
      • Moving the checkpoint reader from swig to pybind11.
      • tf.device & MirroredStrategy now support passing in a tf.config.LogicalDevice.
      • If you're building TensorFlow from source, consider using bazelisk to automatically download and use the correct Bazel version. Bazelisk reads the .bazelversion file at the root of the project directory.
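
    A sketch of tf.data.experimental.dense_to_ragged_batch() from the tf.data items above; the toy dataset is illustrative.

        import tensorflow as tf

        # Elements have different lengths, so a plain .batch() would fail;
        # dense_to_ragged_batch combines them into RaggedTensor batches instead.
        ds = tf.data.Dataset.range(1, 5).map(tf.range)
        ds = ds.apply(tf.data.experimental.dense_to_ragged_batch(batch_size=2))

        for batch in ds:
            print(batch)  # e.g. <tf.RaggedTensor [[0], [0, 1]]>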
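
    And a minimal sketch of tf.autodiff.ForwardAccumulator referenced above, computing a forward-mode Jacobian-vector product.

        import tensorflow as tf

        x = tf.constant(3.0)
        # tangents gives the direction of the JVP; here d/dx with weight 1.
        with tf.autodiff.ForwardAccumulator(primals=x, tangents=tf.constant(1.0)) as acc:
            y = x * x

        print(acc.jvp(y))  # dy/dx = 2 * x = 6.0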

    Thanks to our Contributors

    🚀 This release contains contributions from many people at Google, as well as:

    8bitmp3, Aaron Ma, AbdüLhamit Yilmaz, Abhai Kollara, aflc, Ag Ramesh, Albert Z. Guo, Alex Torres, amoitra, Andrii Prymostka, angeliand, Anshuman Tripathy, Anthony Barbier, Anton Kachatkou, Anubh-V, Anuja Jakhade, Artem Ryabov, autoih, Bairen Yi, Bas Aarts, Basit Ayantunde, Ben Barsdell, Bhavani Subramanian, Brett Koonce, candy.dc, Captain-Pool, caster, cathy, Chong Yan, Choong Yin Thong, Clayne Robison, Colle, Dan Ganea, David Norman, David Refaeli, dengziming, Diego Caballero, Divyanshu, djshen, Douman, Duncan Riach, EFanZh, Elena Zhelezina, Eric Schweitz, Evgenii Zheltonozhskii, Fei Hu, fo40225, Fred Reiss, Frederic Bastien, Fredrik Knutsson, fsx950223, fwcore, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, giuros01, Gomathi Ramamurthy, Guozhong Zhuang, Haifeng Jin, Haoyu Wu, HarikrishnanBalagopal, HJYOO, Huang Chen-Yi, Ilham Firdausi Putra, Imran Salam, Jared Nielsen, Jason Zaman, Jasper Vicenti, Jeff Daily, Jeff Poznanovic, Jens Elofsson, Jerry Shih, jerryyin, Jesper Dramsch, jim.meyer, Jongwon Lee, Jun Wan, Junyuan Xie, Kaixi Hou, kamalkraj, Kan Chen, Karthik Muthuraman, Keiji Ariyama, Kevin Rose, Kevin Wang, Koan-Sin Tan, kstuedem, Kwabena W. Agyeman, Lakshay Tokas, latyas, Leslie-Fang-Intel, Li, Guizi, Luciano Resende, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manuel Freiberger, Mark Ryan, Martin Mlostek, Masaki Kozuki, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Muhwan Kim, Nagy Mostafa, nammbash, Nathan Luehr, Nathan Wells, Niranjan Hasabnis, Oleksii Volkovskyi, Olivier Moindrot, olramde, Ouyang Jin, OverLordGoldDragon, Pallavi G, Paul Andrey, Paul Wais, pkanwar23, Pooya Davoodi, Prabindh Sundareson, Rajeshwar Reddy T, Ralovich, Kristof, Refraction-Ray, Richard Barnes, richardbrks, Robert Herbig, Romeo Kienzler, Ryan Mccormick, saishruthi, Saket Khandelwal, Sami Kama, Sana Damani, Satoshi Tanaka, Sergey Mironov, Sergii Khomenko, Shahid, Shawn Presser, ShengYang1, Siddhartha Bagaria, Simon Plovyt, skeydan, srinivasan.narayanamoorthy, Stephen Mugisha, sunway513, Takeshi Watanabe, Taylor Jakobson, TengLu, TheMindVirus, ThisIsIsaac, Tim Gates, Timothy Liu, Tomer Gafner, Trent Lo, Trevor Hickey, Trevor Morris, vcarpani, Wei Wang, Wen-Heng (Jack) Chung, wenshuai, Wenshuai-Xiaomi, wenxizhu, william, William D. Irons, Xinan Jiang, Yannic, Yasir Modak, Yasuhiro Matsumoto, Yong Tang, Yongfeng Gu, Youwei Song, Zaccharie Ramzi, Zhang, Zhenyu Guo, 王振华 (Zhenhua Wang), 韩董, 이중건 Isaac Lee