Machine Learning Paper Reading Notes #03: meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting
The paper "meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting" is from ICML 2017, by researchers at Peking University. It presents a technique to speed up machine learning model training…
Paper Reading Notes #02: Towards Fully Sparse Training: Information Restoration with Spatial Similarity
(Figure: computation graph from the original paper)
Pruning is a popular technique for reducing the size of deep neural networks without sacrificing accuracy. However, traditional pruning methods can be computationally expensive and lack hardware support.
Paper Reading Notes #01: Attention Is All You Need
This is a new series of my notes on paper reading, covering various areas in computer architecture, algorithms, and machine learning. The first paper is Attention Is All You Need from Google that…
Tensorflow Workflow Explanation
Data / Dataset pipeline (sketched in code below):
- Create the dataset
- Preprocess the dataset
- Map features and labels to the dataset
- Shuffle the dataset
- Create batched data
- Create an iterator for loading data
- Use the iterator for feeding data or in a…
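A minimal sketch of such a pipeline, assuming TensorFlow 2.x and small in-memory NumPy arrays; the data shapes, the `preprocess` function, and the shuffle/batch sizes are illustrative placeholders, not values from the original post.

```python
# A toy tf.data pipeline following the steps above (hypothetical data and sizes).
import numpy as np
import tensorflow as tf

# Hypothetical raw data: 100 examples with 8 features each, binary labels.
features = np.random.rand(100, 8).astype(np.float32)
labels = np.random.randint(0, 2, size=(100,)).astype(np.int64)

# Create the dataset, pairing each feature vector with its label.
dataset = tf.data.Dataset.from_tensor_slices((features, labels))

# Preprocess / map: here we just L2-normalize the features (illustrative only).
def preprocess(x, y):
    return tf.math.l2_normalize(x, axis=-1), y

dataset = dataset.map(preprocess)

# Shuffle the dataset and create batched data.
dataset = dataset.shuffle(buffer_size=100).batch(16)

# Iterate over the dataset to feed data, e.g. into a training step.
for batch_features, batch_labels in dataset:
    # batch_features has shape (<=16, 8); batch_labels has shape (<=16,).
    pass
```

In TensorFlow 1.x, where the post's explicit "create an iterator" step comes from, the same pipeline would instead end with `dataset.make_one_shot_iterator()` and `iterator.get_next()` rather than a plain Python loop.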