mxnet v1.4.0 Release Notes

Release Date: 2019-02-16
  • MXNet Change Log

    1.4.0

    • New Features
      • Java Inference API
      • Julia API
      • Control Flow Operators (experimental)
      • MXNet Horovod Integration
      • SVRG Optimization
      • Subgraph API (experimental)
      • JVM Memory Management
      • Topology-aware AllReduce (experimental)
      • MKLDNN backend: Graph optimization and Quantization (experimental)
        • Graph Optimization
        • Quantization
    • New Operators
    • Feature improvements
      • Operator
      • Optimizer
      • Sparse
      • ONNX
      • MKLDNN
      • Inference
      • Other
    • Frontend API updates
      • Gluon
      • Symbol
    • Language API updates
      • Java
      • R
      • Scala
      • Clojure
      • Perl
      • Julia
    • Performance benchmarks and improvements
    • Bug fixes
    • Licensing updates
    • Improvements
      • Tutorial
      • Example
      • Documentation
      • Website
      • MXNet Distributions
      • Installation
      • Build and CI
      • 3rd party
        • TVM
        • CUDNN
        • Horovod
    • Deprecations
    • Other
    • How to build MXNet
    • List of submodules used by Apache MXNet (Incubating) and when they were updated last

    New Features

    Java Inference API

    Model inference is often managed in a production ecosystem primarily using Java/Scala tools and frameworks. This release seeks to alleviate the need for software engineers to write custom MXNet wrappers to fit their production environment.
    Inference on a trained model has a couple of common use cases:

    1. Real-time or Online Inference - tasks that require immediate feedback, such as fraud detection.
    2. Batch or Offline Inference - tasks that don't require immediate feedback; use cases where you have massive amounts of data and want to run inference or pre-compute inference results.

    Real-time inference is often performed and deployed on popular web frameworks such as Tomcat, Netty, Jetty, etc., all of which use Java. Batch inference is often performed on big data platforms such as Spark using Scala or Java.

    With this project, we had the following goals:

    • ๐Ÿ— Build a new set of APIs that are Java friendly, compatible with Java 7+, are easy to use for inference.
    • Lower the barrier to entry of consuming MXNet for production use cases.
      More details can be found at the Java Inference API document.

    Julia API

    MXNet.jl is the Julia package of Apache MXNet. MXNet.jl brings flexible and efficient GPU computing and state-of-the-art deep learning to Julia. Some highlights of features include:

    • Efficient tensor/matrix computation across multiple devices, including multiple CPUs, GPUs and distributed server nodes.
    • Flexible symbolic manipulation to compose state-of-the-art deep learning models.

    Control Flow Operators (experimental)

    Today we observe more and more dynamic neural network models, especially in the fields of natural language processing and graph analysis. The dynamics in these models come from multiple sources, including:

    • Models are expressed with control flow, such as conditions and loops.
    • NDArrays in a model may have dynamic shapes, meaning that all or some of the NDArrays in a model have different shapes for different batches.
    • Models may want to use more dynamic data structures, such as lists or dictionaries.

    It's natural to express dynamic models in frameworks with an imperative programming interface (e.g., Gluon, PyTorch, TensorFlow Eager). In this kind of interface, developers can use Python control flow, NDArrays with any shape at any moment, or Python lists and dictionaries to store data as they want. The problem with this approach is that it is highly dependent on the originating front-end programming language (mainly Python). A model implemented in one language can only run in the same language.

    A common use case is that machine learning scientists want to develop their models in Python, whereas engineers who deploy the models usually have to use a different "production" language (e.g., Java or C). Gluon tries to close the gap between model development and production deployment. Machine learning scientists design and implement their models in Python with the imperative interface, and then Gluon converts the implementations from imperative to symbolic by invoking hybridize() for model export.

    The goal of this project is to enhance Gluon to turn a dynamic neural network into a static computation graph. The dynamic control flows are expressed by control flow operators with Gluon hybridization, and these are exported for deployment.

    โšก๏ธ More information can be found at Optimize dynamic neural network models with control flow operators

    MXNet Horovod Integration

    Apache MXNet now supports distributed training using the Horovod framework. Horovod is an open source distributed framework created at Uber. It leverages efficient inter-GPU communication to distribute and aggregate model parameters across multiple workers, thus allowing efficient use of network bandwidth and scaling of training of deep learning models. To learn more about MXNet-Horovod integration, check out this blog.
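
    The training script changes only slightly. The sketch below is a hedged illustration (API names are from Horovod's MXNet bindings; the single Dense layer is a placeholder) of the usual init/wrap/broadcast sequence, launched with horovodrun or mpirun:

        import mxnet as mx
        import horovod.mxnet as hvd

        hvd.init()
        ctx = mx.gpu(hvd.local_rank())       # or mx.cpu() when no GPUs are visible

        net = mx.gluon.nn.Dense(10)          # placeholder model
        net.initialize(ctx=ctx)

        # Scale the learning rate by the worker count, then let Horovod wrap the
        # optimizer so gradients are averaged across workers with allreduce.
        opt = mx.optimizer.SGD(learning_rate=0.01 * hvd.size())
        opt = hvd.DistributedOptimizer(opt)

        # Start all workers from identical weights.
        hvd.broadcast_parameters(net.collect_params(), root_rank=0)
        trainer = mx.gluon.Trainer(net.collect_params(), opt, kvstore=None)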

    SVRG Optimization

    SVRG stands for Stochastic Variance Reduced Gradient, which was first introduced in the paper Accelerating Stochastic Gradient Descent using Predictive Variance Reduction in 2013. It is an optimization technique that complements SGD.

    SGD is known for large scale optimization, but it suffers from slow convergence asymptotically due to the inherent variance. SGD approximates the full gradient using a small batch of samples which introduces variance. In order to converge faster, SGD often needs to start with a smaller learning rate.

    โšก๏ธ SVRG remedies the slow convergence problem by keeping a version of the estimated weights that is close to the optimal parameters and maintains the average of the full gradient over the full pass of data. The average of the full gradients of all data is calculated w.r.t to parameters of last mth epochs. It has provable guarantees for strongly convex smooth functions; a detailed proof can be found in section 3 of the paper. SVRG uses a different update rule than SGD: gradients w.r.t current parameters minus gradients w.r.t parameters from the last mth epoch, plus the average of gradients over all data.

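    In symbols (notation ours, following the paper, with snapshot weights \tilde{w} taken every m-th epoch), the full-gradient average and the SVRG update are

        \tilde{\mu} = \frac{1}{n} \sum_{i=1}^{n} \nabla f_i(\tilde{w}), \qquad
        w_{t+1} = w_t - \eta \left( \nabla f_{i_t}(w_t) - \nabla f_{i_t}(\tilde{w}) + \tilde{\mu} \right)

    Since E[\nabla f_{i_t}(\tilde{w})] = \tilde{\mu}, the correction terms cancel in expectation, so the gradient estimate stays unbiased while its variance shrinks as w_t approaches \tilde{w}; this is what permits the larger, non-decaying learning rate.
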
    Key Characteristics of SVRG:

    • Employs explicit variance reduction by using a different update rule compared to SGD.
    • Ability to use a relatively large learning rate, which leads to faster convergence compared to SGD.

    Subgraph API (experimental)

    ๐Ÿ‘ MXNet can integrate with many different kinds of backend libraries, including TVM, MKLDNN, TensorRT, Intel nGraph and more. In general, these backends support a limited number of operators, so running computation in a model usually involves an interaction between backend-supported operators and MXNet operators. These backend libraries share some common requirements:

    • TVM, MKLDNN and nGraph use customized data formats, so interaction between these backends and MXNet requires data format conversion.
    • TVM, MKLDNN, TensorRT and nGraph fuse operators.
    • Integration with these backends should happen at the granularity of subgraphs instead of at the granularity of operators.

    To fuse operators, we need to divide a graph into subgraphs so that the operators in a subgraph can be fused into a single operator. To handle customized data formats, we should partition a computation graph into subgraphs as well, where each subgraph contains only TVM, MKLDNN or nGraph operators. In this way, MXNet converts data formats only when entering such a subgraph, and the operators inside a subgraph handle format conversion themselves if necessary. This makes interaction of TVM and MKLDNN with MXNet much easier: neither the MXNet executor nor the MXNet operators need to deal with customized data formats. Even though invoking these libraries from MXNet requires similar steps, the partitioning rule and the subgraph execution of these backends can be different. As such, we define an interface for backends to customize graph partitioning and subgraph execution inside an operator. More details can be found at PR 12157 and Subgraph API.

    JVM Memory Management

    The MXNet Scala and Java APIs use native memory to manage NDArray, Symbol, Executor, and DataIterator objects through MXNet's internal C APIs. The C APIs provide appropriate interfaces to create, access and free these objects. MXNet Scala has corresponding wrappers and APIs that hold pointer references to the native memory. Before this project, JVM users (e.g. Scala, Clojure, or Java) of MXNet had to manage MXNet objects manually using the dispose pattern. There are a few usability problems with this approach:

    • Users have to track the MXNet objects manually and remember to call dispose. This is not Java idiomatic and not user friendly. Quoting a user: "this feels like I am writing C++ code which I stopped ages ago".
    • Leads to memory leaks if dispose is not called.
    • Many objects in MXNet-Scala are managed in native memory and need dispose called on them as well.
    • Bloated code with dispose() methods.
    • Hard to debug memory-leaks.
      Goals of the project are:
    • Provide MXNet JVM users automated memory management that releases native memory when there are no references to the JVM objects.
    • Provide automated memory management for both GPU and CPU memory without performance degradation.

    More details can be found here: JVM Memory Management

    Topology-aware AllReduce (experimental)

    For distributed training, the Reduce communication patterns used by NCCL and MXNet are not optimal for small batch sizes. The Topology-aware AllReduce approach is based on the idea of using trees to perform the Reduce and Broadcast operations. We can use the idea of minimum spanning trees to do a binary tree Reduce communication pattern to improve distributed training following this paper by Wang, Li, Edo and Smola [1]. Our strategy is to use:

    • a single tree (latency-optimal for small messages) to handle Reduce on small messages
    • multiple trees (bandwidth-optimal for large messages) to handle Reduce on large messages

    More details can be found here: Topology-aware AllReduce
    Note: This is an experimental feature and has known problems - see issue 13341. Please help contribute to improve the robustness of the feature.
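
    A hedged sketch of opting in (we assume MXNET_KVSTORE_USETREE is the toggle, per env_var.md; it must be set before the kvstore is created):

        import os
        os.environ['MXNET_KVSTORE_USETREE'] = '1'   # opt in to tree-based reduce

        import mxnet as mx
        kv = mx.kvstore.create('device')            # single-machine multi-GPU kvstore
        # ... pass kv to gluon.Trainer or module.fit as usual ...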

    MKLDNN backend: Graph optimization and Quantization (experimental)

    Two advanced features, graph optimization (operator fusion) and reduced-precision (INT8) computation, are introduced to the MKLDNN backend in this release (#12530, #13297, #13260).
    These features significantly boost the inference performance on CPU (up to 4X) for a broad range of deep learning topologies. Currently, this feature is only available for inference on platforms with supported Intel CPUs.

    Graph Optimization

    The MKLDNN backend takes advantage of the MXNet subgraph API to implement most of the possible operator fusions for inference, such as Convolution + ReLU, Batch Normalization folding, etc. When using the mxnet-mkl package, users can easily enable this feature by setting export MXNET_SUBGRAPH_BACKEND=MKLDNN, as in the sketch below.
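
    A minimal sketch of how this is typically wired up for Module-based inference (the resnet-50 checkpoint name and input shape are placeholders):

        import os
        os.environ['MXNET_SUBGRAPH_BACKEND'] = 'MKLDNN'   # set before binding the graph

        import mxnet as mx
        sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0)
        mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
        mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
        mod.set_params(arg_params, aux_params)
        # forward passes now run on the fused MKLDNN subgraphs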

    Quantization

    Performance of reduced-precision (INT8) computation is also dramatically improved after the graph optimization feature is applied on CPU platforms. Various models are supported and can benefit from reduced-precision computation, including symbolic models, Gluon models and even custom models. Users can run most of the pre-trained models with only a few lines of commands and a new quantization script imagenet_gen_qsym_mkldnn.py. The observed accuracy loss is less than 0.5% for popular CNN networks, like ResNet-50, Inception-BN, MobileNet, etc.
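
    Under the hood the script drives mxnet.contrib.quantization; a hedged sketch of the Python-level flow (the checkpoint name, calibration record file and example count are placeholders):

        import mxnet as mx
        from mxnet.contrib.quantization import quantize_model

        sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0)
        calib_data = mx.io.ImageRecordIter(path_imgrec='val.rec', batch_size=32,
                                           data_shape=(3, 224, 224))
        # Calibrate activation ranges on a few batches, then emit an INT8 graph.
        qsym, qarg_params, qaux_params = quantize_model(
            sym=sym, arg_params=arg_params, aux_params=aux_params,
            ctx=mx.cpu(), calib_mode='entropy',
            calib_data=calib_data, num_calib_examples=320)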

    ๐ŸŽ Please find detailed information and performance/accuracy numbers here: MKLDNN README, quantization README and design proposal

    New Operators

    • Add trigonometric operators (#12424)
    • [MXNET-807] Support integer label type in ctc_loss operator (#12468)
    • [MXNET-876] make CachedOp a normal operator (#11641)
    • Add index_copy() operator (#12810)
    • Fix getnnz operator for CSR matrix (#12908) - issue #12872
    • [MXNET-1173] Debug operators - isfinite, isinf and isnan (#12967)
    • Add sample_like operators (#13034)
    • Add gauss err function operator (#13229)
    • [MXNET-1030] Enhanced Cosine Embedding Loss (#12750)
    • Add bytearray support back to imdecode (#12855, #12868) (#12912)
    • Add Psroipooling CPU implementation (#12738)

    Feature improvements

    Operator

    • [MXNET-912] Refactoring ctc loss operator (#12637)
    • Refactor L2_normalization (#13059)
    • Customized and faster TakeOpForward operator on CPU (#12997)
    • Allow stop of arange operator to be inferred from dims. (#12064)
    • Make check_isfinite, check_scale optional in clip_global_norm (#12042)
    • Add FListInputNames attribute to softmax_cross_entropy (#12701)
    • [MXNET-867] Pooling1D with same padding (#12594)
    • Add support for more req patterns for bilinear sampler backward (#12386)
    • [MXNET-882] Support for N-d arrays added to diag op. (#12430)

    โšก๏ธ Optimizer

    • Add a special version of Adagrad optimizer with row-wise learning rate (#12365)
    • Add a Python SVRGModule for performing SVRG Optimization Logic (#12376)

    Sparse

    • Fall back when sparse arrays are passed to MKLDNN-enabled operators (#11664)
    • Add Sparse support for logic operators (#12860)
    • Add Sparse support for take(csr, axis=0) (#12889)

    ONNX

    • ONNX export - Clip operator (#12457)
    • ONNX version update from 1.2.1 to 1.3 in CI (#12633)
    • Use modern ONNX API to load a model from file (#12777)
    • [MXNET-892] ONNX export/import: DepthToSpace, SpaceToDepth operators (#12731)
    • ONNX export: Fully connected operator w/o bias, ReduceSum, Square (#12646)
    • ONNX export/import: Selu (#12785)
    • ONNX export: Cleanup (#12878)
    • ONNX export: Scalar, Reshape - Set appropriate tensor type (#13067)
    • [MXNET-886] ONNX export: HardSigmoid, Less, Greater, Equal (#12812)

    MKLDNN

    • MKLDNN Forward FullyConnected op cache (#11611)
    • [MXNET-753] Fallback when using non-MKLDNN supported operators (#12019)
    • MKLDNN Backward op cache (#11301)
    • Implement mkldnn convolution fusion and quantization. (#12530)
    • Improve mkldnn fallback. (#12663)
    • Update MKL-DNN dependency (#12953)
    • Update MKLML dependency (#13181)
    • [MXNET-33] Enhance mkldnn pooling to support full convention (#11047)

    Inference

    • [MXNET-910] Multithreading inference. (#12456)
    • Tweaked the copy in c_predict_api.h (#12600)

    Other

    • Support for upper triangular matrices in linalg (#12904)
    • Introduce Random module / Refactor code generation (#13038)
    • [MXNET-779] Add DLPack Transformation API (#12047)
    • Draw label name next to corresponding bounding boxes when the mapping of id to names is specified (#9496)
    • Track epoch metric separately (#12182)
    • Set correct update on kvstore flag in dist_device_sync mode (#12786)

    โšก๏ธ Frontend API updates

    Gluon

    • โšก๏ธ Update basic_layers.py (#13299)
    • ๐Ÿ‘ Gluon LSTM Projection and Clipping Support (#13056)
    • ๐Ÿ‘‰ Make Gluon download function to be atomic (#12572)
    • [MXNET -1004] Poisson NegativeLog Likelihood loss (#12697)
    • โž• Add activation information for mxnet.gluon.nn._Conv (#12354)
    • Gluon DataLoader: avoid recursionlimit error (#12622)

    Symbol

    • Addressed duplicate object reference issues (#13214)
    • Throw exception if MXSymbolInferShape fails (#12733)
    • Infer dtype in SymbolBlock import from input symbol (#12412)

    โšก๏ธ Language API updates

    Java

    • [MXNET-1198] MXNet Java API (#13162)

    R

    • Refactor R Optimizers to fix memory leak (#11374)
    • Add new Vignettes to the R package
      • Char-level Language modeling (#12670)
      • Multidimensional Time series forecasting (#12664)
    • Fix broken Examples and tutorials
      • Tutorial on neural network introduction (#12117)
      • CGAN example (#12283)
      • Test classification with LSTMs (#12263)

    Scala

    • Explain the details for Scala Experimental (#12348)
    • [MXNET-716] Adding Scala Inference Benchmarks (#12721)
    • [MXNET-716][MIRROR #12723] Scala Benchmark Extension pack (#12758)
    • NativeResource Management in Scala (#12647)
    • Ignore generated Scala files (#12928)
    • Use ResourceScope in Model/Trainer/FeedForward.scala (#12882)
    • [MXNET-1180] Scala Image API (#12995)
    • Update log4j version of Scala package (#13131)
    • Review require() usages to add meaningful messages (#12570)
    • Fix Scala readme (#13082)

    Clojure

    • Introduction to Clojure-MXNet video link (#12754)
    • Improve the Clojure Package README to Make it Easier to Get Started (#12881)
    • [MXNET-873] Bring Clojure Package Inline with New DataDesc and Layout in Scala Package (#12387)
    • Port of Scala Image API to Clojure (#13107)

    Perl

    • [MXNET-1026] [Perl] Sync with recent changes in Python's API (#12739)

    Julia

    ๐ŸŽ Performance benchmarks and improvements

    • โšก๏ธ Update mshadow for omp acceleration when nvcc is not present (#12674)
    • [MXNET-860] Avoid implicit double conversions (#12361)
    • โž• Add more models to benchmark_score (#12780)
    • โž• Add resnet50-v1 to benchmark_score (#12595)

    ๐Ÿ› Bug fixes

    • Fix for #10920 - increase tolerance for sparse dot (#12527)
    • [MXNET-1234] Fix shape inference problems in Activation backward (#13409)
    • Fix a bug in where op with 1-D input (#12325)
    • [MXNET-825] Fix CGAN R Example with MNIST dataset (#12283)
    • [MXNET-535] Fix bugs in LR Schedulers and add warmup (#11234)
    • Fix speech recognition example (#12291)
    • Fix bug in 'device' type kvstore (#12350)
    • fix search result 404s (#12414)
    • Fix help in imread (#12420)
    • Fix render issue on < and > (#12482)
    • [MXNET-853] Fix for smooth_l1 operator scalar default value (#12284)
    • Fix subscribe links, remove disabled icons (#12474)
    • Fix broken URLs (#12508)
    • Fix/public internal header (#12374)
    • Fix lazy record io when used with dataloader and multi_worker > 0 (#12554)
    • Fix error in try/finally block for blc (#12561)
    • Add cudnn_off parameter to SpatialTransformer Op and fix the inconsistency between CPU & GPU code (#12557)
    • [MXNET-798] Fix the dtype cast from non float32 in Gradient computation (#12290)
    • Fix CodeCovs proper commit detection (#12551)
    • Add TensorRT tutorial to index and fix ToC (#12587)
    • Fixed typo in c_predict_api.cc (#12601)
    • Fix typo in profiler.h (#12599)
    • Fixed NoSuchMethodError for Jenkins Job for MBCC (#12618)
    • [MXNET-922] Fix memleak in profiler (#12499)
    • [MXNET-969] Fix buffer overflow in RNNOp (#12603)
    • Fixed param coercion of clojure executor/forward (#12627) (#12630)
    • Fix version dropdown behavior (#12632)
    • Fix reference to wrong function (#12644)
    • Fix the location of the tutorial of control flow operators (#12638)
    • Fix issue 12613 (#12614)
    • [MXNET-780] Fix exception handling bug (#12051)
    • Fix bug in prelu, issue 12061 (#12660)
    • [MXNET-833] [R] Char-level RNN tutorial fix (#12670)
    • Fix static / dynamic linking of gperftools and jemalloc (#12714)
    • Fix #12672, importing numpy scalars (zero-dimensional arrays) (#12678)
    • [MXNET-623] Fixing an integer overflow bug in large NDArray (#11742)
    • Fix benchmark on control flow operators (#12693)
    • Fix regression in MKLDNN caused by PR 12019 (#12740)
    • Fixed broken link for Baidu's WARP CTC (#12774)
    • Fix CNN visualization tutorial (#12719)
    • [MXNET-979] Add fix_beta support in BatchNorm (#12625)
    • R fix metric shape (#12776)
    • Revert [MXNET-979] Add fix_beta support in BatchNorm (#12625) (#12789)
    • Fix mismatch shapes (#12793)
    • Fixed symbols naming in RNNCell, LSTMCell, GRUCell (#12794)
    • Fixed setattr method of _MXClassPropertyMetaClass (#12811)
    • Fixed regex for matching platform type in Scala Benchmark scripts (#12826)
    • Fix broken links (#12856)
    • Fix Flaky Topk (#12798)
    • [MXNET-1033] Fix a bug in MultiboxTarget GPU implementation (#12840)
    • [MXNET-1107] Fix CPUPinned unexpected behaviour (#12031)
    • Fix all in optimizer/optimizer.py (#12886)
    • Fix Batch input issue with Scala Benchmark (#12848)
    • fix type inference in index_copy. (#12890)
    • Fix the paths issue for downloading script (#12913)
    • Fix indptr[0] for take(csr) (#12927)
    • Fix the bug of assigning large integer to NDArray (#12921)
    • Fix Sphinx errors for tutorials and install ToCs (#12945)
    • Fix variable name in tutorial code snippet (#13052)
    • Fix example for mxnet.nd.contrib.cond and fix typo in src/engine (#12954)
    • Fix a typo in operator guide (#13115)
    • Fix variational autoencoder example (#12880)
    • Fix problem with some OSX not handling the cast on imDecode (#13207)
    • [MXNET-953] Fix oob memory read (#12631)
    • Fix Sphinx error in ONNX file (#13251)
    • [Example] Fixing Gradcam implementation (#13196)
    • Fix train mnist for inception-bn and resnet (#13239)
    • Fix a bug in index_copy (#13218)
    • Fix Sphinx errors in box_nms (#13261)
    • Fix Sphinx errors (#13252)
    • Fix the cpp example compiler flag (#13293)
    • Made fixes to sparse.py and sparse.md (#13305)
    • [Example] Gradcam - Fixing a link (#13307)
    • Manually track num_max_thread (#12380)
    • [Issue #11912] throw mxnet exceptions when decoding invalid images. (#12999)
    • Undefined name: load_model() --> utils.load_model() (#12867)
    • Change the way NDArrayIter handles the last batch (#12545)
    • Add embedding to print_summary (#12796)
    • Allow foreach on input with 0 length (#12471)
    • [MXNET-360] auto convert str to bytes in img.imdecode when py3 (#10697)
    • Fix unpicklable transform_first on windows (#13686)

    โšก๏ธ Licensing updates

    • Add license headers to R-package (#12559)
    • License header (#13178)
    • add url and license to clojure package project (#13304)
    • V1.4.x RAT check fix (#14156)
    • add license to pom files (#14155)

    Improvements

    Tutorial

    • [MXNET-422] Distributed training tutorial (#10955)
    • Add a tutorial for control flow operators. (#12340)
    • Add tutorial Gotchas using NumPy (#12007)
    • Updated Symbol tutorial with Gluon (#12190)
    • Improve tutorial redirection (#12607)
    • Include missing import in TensorRT tutorial (#12609)
    • Update Operator Implementation Tutorial (#12230)
    • Add a tutorial for the subgraph API. (#12698)
    • Improve clojure tutorial (#12974)
    • Update scala intellij tutorial (#12827)
    • [Example] Gradcam consolidation in tutorial (#13255)
    • [MXNET-1203] Tutorial infogan (#13144)
    • [MXNET-703] Add a TensorRT walkthrough (#12548)

    Example

    • โšก๏ธ Update C++ example so it is easier to run (#12397)
    • [MXNET-580] Add SN-GAN example (#12419)
    • [MXNET-637] Multidimensional LSTM example for MXNetR (#12664)
    • [MXNET-982] Provide example to illustrate usage of CSVIter in C++ API (#12636)
    • [MXNET-947] Expand scala imclassification example with resnet (#12639)
    • MKL-DNN Quantization Examples and README (#12808)
    • Extending the DCGAN example implemented by gluon API to provide a more straight-forward evaluation on the generated image (#12790)
    • โšก๏ธ [MXNET-1017] Updating the readme file for cpp-package and adding readme file for example directory. (#12773)
    • โšก๏ธ Update tree lstm example (#12960)
    • โšก๏ธ Update bilstm integer array sorting example (#12929)
    • โšก๏ธ Updated / Deleted some examples (#12968)
    • โšก๏ธ Update module example (#12961)
    • โšก๏ธ Update adversary attack generation example (#12918)
    • โšก๏ธ Update Gluon example folder (#12951)
    • โšก๏ธ Update dec example (#12950)
    • โšก๏ธ Updated capsnet example (#12934)
    • โšก๏ธ Updates to several examples (#13068)
    • โšก๏ธ Update multi-task learning example (#12964)
    • โœ‚ Remove obsolete memory cost example (#13235)
    • โšก๏ธ [Example] Update cpp example README (#13280)
    • โšก๏ธ [Example]update NER example readme on module prediction (#13184)
    • โšก๏ธ Update proposal_target.py (#12709)
    • Removing the re-size for validation data, which breaking the validation accuracy of CIFAR training (#12362)
    • โšก๏ธ Update the README with instruction to redirect the user to gluon-cv (#13186)

    Documentation

    • โšก๏ธ Update ONNX API docs references (#12317)
    • ๐Ÿ“š Documentation update related to sparse support (#12367)
    • ๐Ÿ’… Edit shape.array doc and some style improvements (#12162)
    • ๐Ÿ›  Fixed docs/website build checkout bug (#12413)
    • โž• Add Python API docs for test_utils and visualization (#12455)
    • ๐Ÿ›  Fix the installation doc for MKL-DNN backend (#12534)
    • โž• Added comment to docs regarding ToTensor transform (#12186)
    • ๐Ÿ“Œ Pinned dockcross to a tag with fixed ABI for RPi (#12588)
    • ๐Ÿ“š Refine the documentation of im2rec (#12606)
    • โšก๏ธ Update and modify Windows docs (#12620)
    • โšก๏ธ update docs to list cmake required for build from source page (#12592)
    • โšก๏ธ update the distributed_training document (#12626)
    • โž• Add docstring in im2rec.py (#12621)
    • ๐Ÿ“ฆ [Doc] Change the description for pip packages (#12584)
    • ๐Ÿ“š Change dependencies documentation opencv2-->opencv (#12654)
    • โž• Add documents for two new environment variables for memory pool. (#12668)
    • ๐Ÿ“„ Scala Docs - Replace old Symbol api usages (#12759)
    • โž• add/update infer_range docs (#12879)
    • ๐Ÿ›  Fix typo in formula in docstring for GRU cell and layer and add clarification to description (gluon.rnn) (#12896)
    • ๐Ÿ›  Fix the operator API documentation (#12942)
    • ๐Ÿ›  fix broken docs (#12871)
    • ๐Ÿ›  fix mac r install and windows python build from source docs (#12919)
    • Document the newly added env variable (#13049)
    • โž• Add documentation on GPU performance on Quantization example (#13145)
    • ๐Ÿ›  Fix Sphinx python docstring formatting error. (#13177)
    • ๐Ÿ— [Doc] Fix repo paths in Ubuntu build doc (#13101)
    • ๐Ÿ›  Fix Sphinx document parsing error. (#13195)
    • ๐Ÿ›  Fix #13090, Add image.imread to python API doc. (#13176)
    • ๐Ÿ›  Fix Sphinx docstring formatting error. (#13004, #13005, #13006) (#13175)
    • ๐Ÿ›  Fix #12944, Fix Sphinx python docstring formatting error. (#13174)
    • ๐Ÿ›  Fix #13013, Fix Sphinx python docstring error. (#13173)
    • ๐Ÿ›  Fixed Sparse astype doc string formatting error (#13171)
    • ๐Ÿ›  Fixed Documentation issues (#13215)
    • โšก๏ธ update the doc (#13205)
    • ๐Ÿ›  Fix Sphinx doc errors (#13170)
    • ๐Ÿ›  Fix Sphinx python docstring error: initializer.InitDesc (#12939) (#13148)
    • ๐Ÿ›  Fix Sphinx python docstring error: text contrib module (#12949) (#13149)
    • ๐Ÿ›  Fix Sphinx python docstrings (#13160)
    • โž• Add Java API docs generation (#13071)
    • ๐Ÿ›  Fix scaladoc build errors (#13189)
    • โž• Add missing documentations for getnnz (#13128)
    • โž• Addressed ONNX module documentation warnings and added notes for short-form representation (#13259)
    • ๐Ÿ›  Doc fixes (#13256)
    • โž• Addressed doc issues (#13165)
    • ๐Ÿ— stop gap fix to let website builds through; scaladoc fix pending (#13298)
    • ๐Ÿ›  Fix Sphinx python docstring formatting error. (#13194)
    • Visualization doc fix. Added notes for shortform (#13291)
    • โšก๏ธ [Example] Add docstring for test optimizer and test score (#13286)
    • ๐Ÿ›  Fix descriptions in scaladocs for macro ndarray/sybmol APIs (#13210)
    • Sphinx error reduction (#12323)
    • Sphinx errors in Gluon (#13275)
    • โšก๏ธ Update env_var.md (#12702)
    • โšก๏ธ Updated the Instructions for use of the label bot (#13192)
    • โž• Added/changed file_name, brief description comments in some files (#13033)

    Website

    • adding apache conf promo to home page (#12347)
    • Consistent website theme and custom 404 (#12426)
    • update apachecon links to https (#12521)
    • [HOLD] 1.3.0 release website updates (#12509)
    • add mentions of the gluon toolkits and links to resources (#12667)
    • remove apachecon promo (#12695)
    • [MXNet-1002] Add GluonCV and NLP toolkits, Keras, and developer wiki to navigation (#12704)

    MXNet Distributions

    • ๐Ÿณ Make the output of ci/docker/install/ubuntu_mklml.sh less verbose (#12422)
    • ๐Ÿ›  Fix tvm dependency for docker (#12479)
    • ๐Ÿณ [MXNET-703] Add TensorRT runtime Dockerfile (#12549)
    • ๐Ÿš€ [MXNET-951] Python dockerfiles built on pip binaries and build/release script (#12556)
    • ๐Ÿ”„ Change numpy version to 1.15.2 in python and docker install requirements (#12711)
    • โž• Add mkl-dnn to docker install method (#12643)
    • ๐Ÿ›  Fix docker cleanup race condition (#13092)
    • ๐Ÿ›  Bugfix in ci/docker_cache.py (#13249)
    • โšก๏ธ Update PyPI version number (#11773)
    • โšก๏ธ update download links to apache distros (#12617)

    Installation

    • Installation instructions consolidation (#12388)
    • Refine mxnet python installation (#12696)
    • R install instructions update for macOS (#12832)
    • remove legacy installation of Roxygen2 5.0 and add R-specific clean target (#12993) (#12998)
    • Force APT cache update before executing install (#13285)
    • Make the Ubuntu scripts executable after download. (#12180)
    • replacing windows setup with newer instructions (#12504)
    • Updated download links and verification instructions (#12651)
    • Remove pip overwrites (#12604)

    ๐Ÿ— Build and CI

    • ๐Ÿ— [MXNET-908] Enable minimal OSX Travis build (#12462)
    • ๐Ÿ Use jom for parallel Windows builds (#12533)
    • ๐Ÿ— [MXNET-950] Enable parallel R dep builds in CI (#12552)
    • ๐Ÿ Speed up CI Windows builds (#12563)
    • ๐Ÿ— [MXNET-908] Speed up travis builds to avoid timeouts (#12706)
    • ๐Ÿ— Simplify mac MKLDNN build (#12724)
    • ๐Ÿ— [MXNET-674] Speed up GPU builds in CI (#12782)
    • ๐Ÿ‘Œ Improved git reset for CI builds (#12784)
    • ๐Ÿ‘Œ Improve cpp-package example project build files. (#13093)
    • โž• Add --no-cache option to build.py when building containers (#13182)
    • โž• Addressed sphinx build issue (#13246)
    • ๐Ÿ‘• Tighten up PyLint directives again (#12322)
    • ๐Ÿ‘ท [MXNET-859] Add a clang-tidy stage to CI (#12282)
    • ๐Ÿ‘ท A solution to prevent zombie containers locally and in CI (#12381)
    • ๐ŸŒฒ [MXNET-696][PYTHON][UNDEFINED NAME] import logging in ci/util.py (#12488)
    • [MXNET-703] Static linking for libprotobuf with TensorRT (#12475)
    • โœ‚ Remove regression checks for website links (#12507)
    • ๐Ÿ‘ท [MXNET-953] - Add ASAN sanitizer, Enable in CI (#12370)
    • ๐Ÿ‘ Allow custom path and static linking for custom mallocs in make (#12645)
    • Correct PR branch detection in code coverage (#12615)
    • โšก๏ธ Update osx.mk - Added apple to USE_BLAS comment (#12819)
    • [MXNET-953] Correct ASAN cflags flag (#12659)
    • ๐Ÿ‘ [MXNET-1025] Add Jetpack 3.3 support to Jetson (#12735)
    • ๐Ÿ‘ท Fail the broken link job when broken links are found (#12905)
    • โœ‚ Removed unused header (#13066)
    • โ†ช Maven Surefire bug workaround (#13081)
    • โž• Add Turing and Volta support to arch_name (#13168)
    • ๐Ÿšš Moves f16c autodetection to its own cmake module (#12331)
    • la_op_inline.h to la_op-inl.h for consistency (#13045)
    • ๐Ÿ‘ท [MXNET-793] Virtualized ARMv7 with Qemu CI integration (#13203)
    • โœ‚ Remove unused variable rotateM_ (#10803)
    • ๐Ÿ”จ Separate refactoring from #12276 in a prior PR (#12296)
    • ๐Ÿšš [MXNET-860] Remove std::moves that have no affect (#12730)
    • [MXNET-860] Use emplace where helpful (#12694)
    • Enable C++ coverage (#12642)
    • โšก๏ธ [MXNET-860] Update to modern nullptr usage (#12352)
    • [MXNET-860] Reduce redundant copies, check for regressions with clang-tidy (#12355)

    3rd party

    TVM:
    • Updated tvm submodule head (#12764)
    • Updated tvm submodule head (#12448)

    CUDNN:
    • [MXNET-1179] Enforce deterministic algorithms in convolution layers (#12992)
    • CudnnFind() usage improvements (#12804)
    • Add option for automatic downcasting dtype for cudnn to allow using Tensorcore for fp32 (#12722)

    Horovod:
    • [MXNET-1111] Remove CPUPinned in ImageRecordIter (#12666)

    Deprecations

    • Add a deprecation message to contrib_CTCLoss (#13042); contrib_CTCLoss is deprecated and now emits a message when used.

    Other

    • โšก๏ธ Updating news, readme files and bumping master version to 1.3.1 (#12525)
    • โž• Add new name to CONTRIBUTORS.md (#12763)
    • โšก๏ธ Update contribute.md (#12685)
    • โšก๏ธ Updated CONTRIBUTORS.md to include lebeg and gigasquid, moved mabreu to committers section (#12766)
    • โšก๏ธ Update CONTRIBUTORS.md (#12996)
    • โšก๏ธ Updated CONTRIBUTORS.md to include mxnet-label-bot (#13048)

    ๐Ÿ— How to build MXNet

    Please follow the instructions at https://mxnet.incubator.apache.org/install/index.html

    โšก๏ธ List of submodules used by Apache MXNet (Incubating) and when they were updated last

    โšก๏ธ Submodule@commit ID::Last updated by MXNet:: Last update in submodule

    • cub@05eb57f::Jul 31, 2017 :: Jul 31, 2017
    • dlpack@10892ac:: Oct 30, 2017 :: Aug 23, 2018
    • dmlc-core@0a0e8ad:: Aug 15, 2018 :: Nov 15, 2018
    • โœ… googletest@ec44c6c:: July 14, 2016 :: July 14, 2016
    • mkldnn @ 722901c:: Feb 13, 2019 :: Feb 12, 2019
    • mshadow@696803b:: Sep 28, 2018 :: Nov 7, 2018
    • onnx-tensorrt@3d8ee04:: Aug 22, 2018 :: Nov 10, 2018
    • openmp@37c7212: Nov 22, 2017 :: Nov 13, 2018
    • ps-lite@8a76389: April 25, 2018 :: Oct 9, 2018
    • tvm@0f053c8: Oct 10, 2018 :: Oct 8, 2018