About Me
(Nov. 2022 Update: The HTTPS certificate for my website seems to have expired recently. If you see a REALLY BIG selfie of me (sorry about this, I’m not a narcissist..), you are probably accessing my website via https instead of http (will fix this soon!). Click here to switch to the normal view.)
Dec. 2022: I am actively looking for research internship opportunities (on efficient/mobile/edge deep learning for vision tasks) for the 2023 summer. If you are interested, feel free to drop me an email (wang.huan [at] northeastern [dot] edu). Thanks!
My name is Huan Wang (Chinese: 王欢. My first name means “happiness/joy” in Chinese, a simple and ultimate wish from my parents). I am currently a 4th-year Ph.D. candidate at the SMILE Lab of Northeastern University (Boston, USA), advised by Prof. Yun (Raymond) Fu. Before that, I received my M.S. and B.E. degrees from Zhejiang University (Hangzhou, China) in 2019 and 2016, respectively, advised by Prof. Haoji Hu. In the summer of 2018, I visited the VLLab at the University of California, Merced, where I was lucky to work with Prof. Ming-Hsuan Yang. I have also been very fortunate to work with fantastic industry colleagues from Alibaba Group / MERL / Snap during my summer internships.
I am interested in a variety of topics in computer vision and machine learning. My research centers on efficient deep learning (a.k.a. model compression), spanning from the common image classification task (GReg, Awesome-PaI, TPP) to neural style transfer (Collaborative-Distillation), single image super-resolution (ASSL, SRP), and 3D novel view synthesis (R2L).
I do my best to make my research easily reproducible.
News
- 2023.01: [ICLR’23] Two papers accepted by ICLR’23: Trainability Preserving Neural Pruning and Image as Set of Points (Oral, 5%).
- 2023.01: [Preprint] 🔥Check out our preprint that deciphers the confusing benchmark situation in neural network (filter) pruning: [Arxiv] [Code]
- 2022.12: [Preprint] 🔥Check out our new blazing-fast🚀 neural rendering model for mobile devices: MobileR2L renders 1008x756 images at 56 fps on an iPhone 13 [Arxiv] [Code]
- 2022.12: Recognized among 2022 Snap Fellowship Honorable Mentions. Thanks to Snap!
- 2022.10: [NeurIPS’22] Awarded the NeurIPS’22 Scholar Award. Thanks to NeurIPS!
- 2022.09: [NeurIPS’22] 3 papers accepted by NeurIPS’22: one led by me (it was my 1st internship work at MERL in the 2020 summer; rejected 4 times, and now the loop is finally closed. Thanks to my co-authors and the reviewers!) and two collaborations. Code: Good-DA-in-KD, PEMN, AFNet.
- 2022.09: [TIP’22] One journal paper “Semi-supervised Domain Adaptive Structure Learning” accepted by TIP. Congrats to Can!
- 2022.07: [ECCV’22] We present the first residual MLP network to represent a neural light field (NeLF) for efficient novel view synthesis. Check out our webpage and arXiv paper!
- 2022.04: [IJCAI’22] We offer the very first survey paper on Pruning at Initialization, accepted by IJCAI’22 [Arxiv] [Paper Collection].
- 2022.01: [ICLR’22] Two papers on neural network sparsity accepted by ICLR’22. One is about efficient image super-resolution (SRP), the other about the lottery ticket hypothesis (DLTH).
- 2021.09: [NeurIPS’21] One paper on efficient image super-resolution is accepted by NeurIPS’21 as a Spotlight paper (<3%)! [PyTorch Code]
- 2021.06: [Internship’21] Start summer internship at Snap Inc., working with the fantastic Creative Vision team.
- 2021.01: [ICLR’21] One paper about neural network pruning accepted by ICLR’21 as poster. [Arxiv] [PyTorch Code]
- 2020.06: [Internship’20] Start summer internship at MERL, working with Dr. Mike Jones and Dr. Suhas Lohit. (09/2022 Update: The paper from this project was finally accepted by NeurIPS’22. Thanks to my co-authors and the reviewers!)
- 2020.02: [CVPR’20] One paper about model compression for ultra-resolution neural style transfer is accepted by CVPR 2020. Code released here.
- 2020.01: [MLSys’20] 2019 summer intern paper accepted by MLSys 2020. (Project: MNN from Alibaba, one of the fastest mobile AI engines on this planet. Give it a try!)
- 2019.12: [JSTSP’19] One journal paper accepted by IEEE JSTSP.
- 2019.09: Join SMILE Lab at NEU (Boston, USA) to pursue my Ph.D. degree.
- 2019.07: [Internship’19] Start summer internship at Taobao, Alibaba Group, in Hangzhou, China.
- 2019.06: Graduate with M.Sc. degree from Zhejiang University, Hangzhou, China.
Preprint
Huan Wang, Can Qin, Yue Bai, Yun Fu.
"Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning".
Preprint, 2023.
Junli Cao, Huan Wang, Pavlo Chemerys, Vladislav Shakhrai, Ju Hu, Yun Fu, Denys Makoviichuk, Sergey Tulyakov, Jian Ren. "Real-Time Neural Light Field on Mobile Devices". Preprint, 2022.
Selected Publications
Huan Wang, Yun Fu. "Trainability Preserving Neural Pruning". In ICLR, 2023.
Xu Ma, Yuqian Zhou, Huan Wang, Can Qin, Bin Sun, Chang Liu, Yun Fu. "Image as Set of Points". In ICLR (Oral, 5%), 2023.
Huan Wang, Suhas Lohit, Mike Jones, Yun Fu. "What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective". In NeurIPS, 2022.
Yue Bai, Huan Wang, Xu Ma, Yitian Zhang, Zhiqiang Tao, Yun Fu. "Parameter-Efficient Masking Networks". In NeurIPS, 2022.
Yitian Zhang, Yue Bai, Huan Wang, Yi Xu, Yun Fu. "Look More but Care Less in Video Recognition". In NeurIPS, 2022.
Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, Yun Fu, Sergey Tulyakov. "R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis". In ECCV, 2022.
Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu. "Recent Advances on Neural Network Pruning at Initialization". In IJCAI, 2022.
Huan Wang*, Yulun Zhang*, Can Qin, Yun Fu. "Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning". In ICLR, 2022. (*Equal Contribution)
Yue Bai, Huan Wang, Zhiqiang Tao, Kunpeng Li, Yun Fu. "Dual Lottery Ticket Hypothesis". In ICLR, 2022.
Huan Wang*, Yulun Zhang*, Can Qin, Yun Fu. "Aligned Structured Sparsity Learning for Efficient Image Super-Resolution". In NeurIPS (Spotlight), 2021. (*Equal Contribution)
Huan Wang, Can Qin, Yulun Zhang, Yun Fu. "Neural Pruning via Growing Regularization". In ICLR, 2021.
Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, Ming-Hsuan Yang. "Collaborative Distillation for Ultra-Resolution Universal Style Transfer". In CVPR, 2020.
Xiaotang Jiang, Huan Wang, Yiliu Chen, Ziqi Wu, et al. "MNN: A Universal and Efficient Inference Engine". In MLSys (Oral), 2020.
Huan Wang, Xinyi Hu, Qiming Zhang, Yuehai Wang, Lu Yu, Haoji Hu. "Structured Pruning for Efficient ConvNets via Incremental Regularization". In NeurIPS Workshop, 2018; IJCNN, 2019 (Oral); Journal extension to IEEE JSTSP, 2019.
Huan Wang*, Qiming Zhang*, Yuehai Wang, Haoji Hu. "Structured Probabilistic Pruning for Convolutional Neural Network Acceleration". In BMVC, 2018 (Oral).
Academic Services
- Journal Reviewer: IJCV, TIP, TNNLS, JSTSP, Neurocomputing, etc.
- Conference Reviewer: CVPR, ECCV, ICML, NeurIPS, ICLR, AAAI, IJCAI, MLSys, etc.