Kenji KAWAGUCHI
NUS Presidential Young Professor
Postdoctoral Fellow, Harvard University
Ph.D. in Computer Science, MIT
S.M. in EECS, MIT
- Artificial Intelligence
- Algorithms & Theory
- Deep learning in both theory and practice
- Machine learning theory
- Physics-informed neural networks
- Deep learning + X
- Nonconvex optimization
Kenji Kawaguchi is a Presidential Young Professor in the Department of Computer Science at the National University of Singapore (NUS). He is an invited participant in the Isaac Newton Institute for Mathematical Sciences programme on "Mathematics of Deep Learning" at the University of Cambridge, one of 77 invited participants from around the world. He received his Ph.D. in Computer Science and S.M. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology (MIT), and then joined Harvard University as a postdoctoral fellow. His research interests include deep learning theory, as well as deep learning and artificial intelligence (AI) more broadly. His research group aims to create a positive feedback loop between theory and practice in deep learning research through collaborations with researchers on both the practical and theoretical sides.
Kenji Kawaguchi. On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers. In International Conference on Learning Representations (ICLR), 2021.
Linjun Zhang*, Zhun Deng*, Kenji Kawaguchi*, Amirata Ghorbani, James Zou. How Does Mixup Help With Robustness and Generalization? In International Conference on Learning Representations (ICLR), 2021.
Keyulu Xu*, Mozhi Zhang, Stefanie Jegelka, Kenji Kawaguchi*. Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth. In International Conference on Machine Learning (ICML), 2021.
Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu Pham, Quoc V. Le. Towards Domain-Agnostic Contrastive Learning. In International Conference on Machine Learning (ICML), 2021.
Vikas Verma, Meng Qu, Kenji Kawaguchi, Alex Lamb, Yoshua Bengio, Juho Kannala, Jian Tang. GraphMix: Improved Training of GNNs for Semi-Supervised Learning. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI), 2021.
Ameya D. Jagtap, Kenji Kawaguchi, George E. Karniadakis. Adaptive Activation Functions Accelerate Convergence in Deep and Physics-informed Neural Networks. Journal of Computational Physics, 404, 109136, 2020.
Ameya D. Jagtap*, Kenji Kawaguchi*, George E. Karniadakis. Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks. Proceedings of the Royal Society A, 476, 20200334, 2020.
Kenji Kawaguchi. Deep Learning without Poor Local Minima. In Advances in Neural Information Processing Systems (NeurIPS), 2016.
Kenji Kawaguchi, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. Bayesian Optimization with Exponential Convergence. In Advances in Neural Information Processing Systems (NeurIPS), 2015.
Awards & Honours
Teaching
- CS5339: Theory and Algorithms for Machine Learning