This was a period when we were aiming at ICCV and revising the TIP manuscript. Papers that I only read through other people's summaries or skimmed online may be omitted, and these brief notes may not cover each work in full. You are welcome to discuss any of the following papers with me or to recommend more related ones.

 

  • Reinforcement learning:
    • [NIPS 1999] Policy Gradient Methods for Reinforcement Learning with Function Approximation [the classical policy-gradient method for training networks with RL]
    • [ICLR 2017] Attend, Adapt and Transfer: Attentive Deep Architecture for Adaptive Transfer from Multiple Sources in the Same Domain
    • [arXiv 2013] Playing Atari with Deep Reinforcement Learning [Deep Q-learning: a network approximates the action-value function Q(s, a), trained towards the target r + γ·max over next-step actions; see the sketch after this list]
  • GAN/Adversarial Learning (Notes, Notes)
    • [NIPS 2014] Generative Adversarial Networks
    • [arXiv 2014] Conditional Generative Adversarial Nets
    • [ICML 2015] Generative Moment Matching Networks [with MMD]
    • [NIPS 2015] Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks [Laplacian pyramid GAN, generate on multiple scales]
    • [ICLR 2016] Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [DCGAN]
    • [ICML 2016] Generative Adversarial Text to Image Synthesis [Text+CondGAN]
    • [arXiv 2016] Learning from Simulated and Unsupervised Images through Adversarial Training [refine synthetic images so a discriminator cannot tell them from real ones]
    • [arXiv 2016] Image-to-Image Translation with Conditional Adversarial Networks [pix2pix: conditional GAN plus an L1 reconstruction loss]
    • [arXiv 2016] Generating Images with Recurrent Adversarial Networks [RNN+GAN, generate the image step by step]
    • [NIPS 2016] Dual Learning for Machine Translation [jointly learn a primal task and its dual task; an autoencoder is a special case]
      • [arXiv 2015] On Using Monolingual Corpora in Neural Machine Translation [basic neural machine translation]
    • [NIPS 2016] InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets [InfoGAN, mutual information]
    • [ICML 2016] Autoencoding beyond pixels using a learned similarity metric [VAE+GAN]
    • [ICLR 2016] Adversarial Autoencoders [AE+GAN]
    • [NIPS 2016] Tutorial: Generative Adversarial Networks
    • [arXiv 2016] f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization 
    • [arXiv 2017] Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities [LS-GAN]
    • [arXiv 2017] Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks [CycleGAN]
    • [arXiv 2017] Wasserstein GAN [uses the Wasserstein distance in place of the JS divergence]
      • [Springer 2009] Optimal Transport [Wasserstein distance]
      • [ICLR 2017] Towards Principled Methods for Training Generative Adversarial Networks [WGAN v0]
      • [arXiv 2017] Improved Training of Wasserstein GANs [penalize the norm of the critic's gradient rather than clipping weights; see the sketch after this list]
    • [arXiv 2017] Aspect-augmented Adversarial Networks for Domain Adaptation [attention-like adaptation in NLP]
    • [ICLR 2017] Energy-based Generative Adversarial Network [an autoencoder as the energy function]
  • Person Re-identification
    • [CVPR 2013] Unsupervised salience learning for person re-identification [densely sample patches and compute salience by nearest-neighbour search]
    • [BMVC 2015] Dictionary Learning with Iterative Laplacian Regularisation for Unsupervised Person Re-identification [pseudo-labels] 
    • [CVPR 2015] Transferring a Semantic Representation for Person Re-Identification and Search [Learning attribute from auxiliary datasets]
    • [arXiv 2016] Deep Transfer Learning for Person Re-identification [two-step fine-tuning]
    • [CVPR 2016] Unsupervised cross-dataset transfer learning for person re-identification [multi-task dictionary learning]
    • [ECCV 2016] Person Re-identification by Unsupervised L1 Graph Learning [builds a graph with pseudo-labels and iteratively updates it under an L1 constraint]
    • [arXiv 2017] Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro [use a GAN to generate person images and label them with a smoothed, uniform distribution over all identities; see the sketch after this list]
    • [CVPR 2016] Top-push Video-based Person Re-identification [push on the most violated (hardest) triplet; see the sketch after this list]
  • Action Recognition/Network
    • [NIPS 2014] Two-Stream Convolutional Networks for Action Recognition in Videos [optical flow+RGB]
    • [CVPR 2016] Joint Unsupervised Learning of Deep Representations and Image Clusters [update image cluster and representation iteratively]
    • [CVPR 2016] Convolutional Two-Stream Network Fusion for Video Action Recognition [fusion]
  • Domain adaptation
    • [ICCV 2015] Simultaneous Deep Transfer Across Domains and Tasks [domain confusion loss plus soft labels transferred across domains]
    • [ECCV 2016] Deep reconstruction-classification networks for unsupervised domain adaptation [supervised classification on the source + joint reconstruction of the target]
    • [NIPS 2016] Adversarial discriminative domain adaptation [GAN loss as adaptation loss]
    • [JMLR 2016] Domain-Adversarial Training of Neural Networks [gradient reversal layer; see the sketch after this list]
  • Neural Network 
    • [JMLR 2015] Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift [normalize activations over each mini-batch to keep their distribution stable]
    • [arXiv 2016] Decoupled Neural Interfaces using Synthetic Gradients [Using an extra component to estimate the gradients for each layer, and the real gradients are used to update the estimator]
    • [arXiv 2016] Progressive Neural Networks [Share knowledge from previous tasks by connecting new layer to the previously trained network]
    • [arXiv 2016] Layer Normalization [normalize the activations of each sample across its features, rather than across the batch; see the sketch after this list]
  • Attention
    • [NIPS 2015] Spatial Transformer Networks [location-wise attention via a differentiable affine transform and resampling; see the sketch after this list]
    • [ICML 2015] DRAW: A Recurrent Neural Network For Image Generation [attention read/write]
    • [NIPS 2014] Recurrent Models of Visual Attention [Multiple glimpse using RNN]
    • [ICLR 2016w] Action Recognition using Visual Attention [soft spatial attention with an LSTM over time (spatio-temporal)]
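
Below are a few rough sketches of techniques noted in the list above. They are my own illustrative PyTorch-style code with made-up names and shapes, not code from the papers.

The deep Q-learning note: the network approximates Q(s, a) and is regressed towards the bootstrapped target r + γ·max over next-step actions, computed with a frozen target network. A minimal sketch, assuming hypothetical `q_net` and `target_net` that map state batches to per-action values (the Huber/smooth-L1 loss is just one common choice):

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """One Q-learning step: regress Q(s, a) towards r + gamma * max_a' Q_target(s', a')."""
    s, a, r, s_next, done = batch  # states, actions, rewards, next states, done flags (float 0/1)

    # Q-values predicted for the actions actually taken
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)

    # Bootstrapped target uses the frozen target network and the max over next actions
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * (1.0 - done) * q_next

    return F.smooth_l1_loss(q_sa, target)
```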
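
The "Improved Training of Wasserstein GANs" note refers to the gradient penalty: instead of clipping the critic's weights, penalize deviations of the critic's gradient norm from 1 at points interpolated between real and fake samples. A hedged sketch, assuming image batches of shape (N, C, H, W) and a placeholder `critic`:

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP style penalty: (||grad_x critic(x_hat)||_2 - 1)^2 on interpolated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)  # per-sample mixing weight
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)

    score = critic(x_hat)
    grad, = torch.autograd.grad(outputs=score.sum(), inputs=x_hat, create_graph=True)

    grad_norm = grad.view(grad.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```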
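
The GAN-for-re-ID note: as I understand the label-smoothing idea, GAN-generated pedestrian images get a uniform target distribution over all K identities, so their cross-entropy term reduces to the mean negative log-probability. A minimal sketch with hypothetical classifier logits:

```python
import torch
import torch.nn.functional as F

def smoothed_label_loss(logits_real, labels_real, logits_fake):
    """Real images: ordinary cross-entropy. Generated images: uniform 1/K target over all identities."""
    loss_real = F.cross_entropy(logits_real, labels_real)

    log_p = F.log_softmax(logits_fake, dim=1)
    loss_fake = -(log_p.mean(dim=1)).mean()  # cross-entropy against the uniform 1/K target

    return loss_real + loss_fake
```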
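
The top-push note: a margin is enforced between an anchor's positive distance and its closest (most violated) negative distance. The sketch below is the batch-hard variant of that idea rather than the paper's exact formulation, assuming a precomputed pairwise distance matrix:

```python
import torch

def top_push_loss(dist, labels, margin=0.3):
    """For each anchor: hardest positive distance must beat the closest negative by `margin`."""
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # (N, N) same-identity mask

    d_pos = dist.masked_fill(~same, float('-inf')).max(dim=1).values  # hardest positive
    d_neg = dist.masked_fill(same, float('inf')).min(dim=1).values    # closest negative

    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
```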
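
The "reverse gradient" note for domain-adversarial training: a layer that is the identity on the forward pass but multiplies the gradient by -λ on the backward pass, so the feature extractor learns to confuse the domain classifier. A common PyTorch-style implementation sketch:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity forward; gradient multiplied by -lambda backward (gradient reversal layer)."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # None: no gradient w.r.t. lambda

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# usage: domain_logits = domain_classifier(grad_reverse(features, lam))
```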
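
The batch-norm and layer-norm notes differ only in the normalization axis: BN normalizes each feature over the mini-batch, LN normalizes each sample over its features. A minimal sketch that ignores the learned scale/shift and running statistics:

```python
import torch

def batch_norm(x, eps=1e-5):
    """Normalize each feature over the batch dimension (dim 0)."""
    return (x - x.mean(dim=0, keepdim=True)) / (x.var(dim=0, keepdim=True, unbiased=False) + eps).sqrt()

def layer_norm(x, eps=1e-5):
    """Normalize each sample over its feature dimension (dim 1)."""
    return (x - x.mean(dim=1, keepdim=True)) / (x.var(dim=1, keepdim=True, unbiased=False) + eps).sqrt()

x = torch.randn(32, 128)  # (batch, features)
print(batch_norm(x).mean(dim=0)[:3], layer_norm(x).mean(dim=1)[:3])  # both near zero along their axis
```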
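
The spatial transformer note ("location-wise attention with a differentiable transform"): a small localisation network predicts an affine matrix θ, and the input is resampled through a differentiable grid, so attention over locations is learned by backprop. A sketch of just the sampling step using PyTorch's grid utilities (the localisation network itself is omitted):

```python
import torch
import torch.nn.functional as F

def spatial_transform(x, theta):
    """x: (N, C, H, W) feature map; theta: (N, 2, 3) affine matrices from a localisation net."""
    grid = F.affine_grid(theta, x.size(), align_corners=False)  # sampling coordinates per output pixel
    return F.grid_sample(x, grid, align_corners=False)          # differentiable bilinear resampling

# identity transform as a sanity check
x = torch.randn(1, 3, 32, 32)
theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
y = spatial_transform(x, theta)  # y ≈ x
```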

 

Legend:

  • Studied
  • Still processing
  • Not finished
  • Skimming

 
