- Each time, we will have 1~3 presenters.
- Each presenter should present a group of papers on related topics.
- All topics related to machine learning are welcome.
Homework: Choose 3~5 of your favorite papers, present them, and explain why you chose them.
If you want to attend, just send me an email.
|30/9 2019||Chence Shi, Tianyuan Zhang||N/A, N/A||Zhang: |
--- Image-to-image translation
Image-to-image translation with conditional adversarial networks
Photographic image synthesis with cascaded refinement networks
Unpaired image-to-image translation using cycle-consistent adversarial networks
Unsupervised image-to-image translation networks
Multimodal unsupervised image-to-image translation
--- Style transfer & Conditional normalization
A neural algorithm of artistic style
Perceptual losses for real-time style transfer and super-resolution
Arbitrary style transfer in real-time with adaptive instance normalization
A style-based generator architecture for generative adversarial networks
Semantic image synthesis with spatially-adaptive normalization
Few-shot unsupervised image-to-image translation
|6/10 2019||Baifeng Shi, Ziqi Pang||Shi, ppt, N/A||Shi: |
Internal Statistics of a Single Natural Image
“Zero-Shot” Super-Resolution using Deep Internal Learning
Blind Deblurring Using Internal Patch Recurrence
Semi-parametric Image Synthesis
|13/10 2019||Ziqi Pang, Chence Shi||Pang: Distillation ppt||Pang: |
Distilling the Knowledge in a Neural Network
Label Refinery: Improving ImageNet Classification through Label Progression
A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning
Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Structured Knowledge Distillation for Semantic Segmentation
Diversity with Cooperation: Ensemble Methods for Few-Shot Classification
Learning without Forgetting
Progress & Compress: A scalable framework for continual learning
|20/10 2019||Dinghuai Zhang||Example reweighting for training deep networks||MentorNet: Regularizing Very Deep Neural Networks on Corrupted Labels |
Focal Loss for Dense Object Detection
Gradient harmonized single-stage detector
|25/10 2019||Tianyuan Zhang, Hengyi Wang||Self-supervised learning, Adversarial examples in NLP||N/A|
|03/11 2019||Baifeng Shi||CNN priors and image internal statistics||SinGAN: internal statistics + GAN (ICCV 2019 Best Paper Award) |
Deep Image Prior (CVPR2018)
An Internal Learning Approach to Video Inpainting
Where is the Information in a Deep Neural Network
Bruce, Neil, and John Tsotsos. “Saliency based on information maximization.” (NIPS 2006)
|10/11, 17/11 2019||CVPR Break||N/A||N/A|