Jinwoo Kim

jinwoo-kim [at] kaist.ac.kr

I am a Ph.D. student at KAIST School of Computing, advised by Seunghoon Hong. My name 진우 眞友 is pronounced [jeen-oo] in Korean.

I am interested in endowing deep learning models with appropriate inductive biases to improve generalization and enable learning from limited data. I have been studying this problem primarily in the context of geometric deep learning, focusing on general-purpose deep neural networks that respect invariance to group symmetries for solving tasks on graphs and other structured data.

CV  /  Google Scholar  /  GitHub  /  Twitter  /  LinkedIn


News

Nov 2023: I gave an invited presentation on Probabilistic Symmetrization at Machine Learning Lab @ POSTECH.
Oct 2023: Orbit Distance Minimization was accepted to NeurIPS 2023 NeurReps Workshop.
Sep 2023: Probabilistic Symmetrization was accepted to NeurIPS 2023 as a spotlight presentation.
May 2023: Visual Token Matching was introduced in the latest issue of Nikkei Robotics.
Mar 2023: Visual Token Matching won the ICLR 2023 Outstanding Paper Award, becoming the first paper from South Korea to receive a best paper award at a major machine learning conference.

Older

Jan 2023: I gave an invited presentation on Tokenized Graph Transformer at a reading group of Microsoft USA.
Jan 2023: Tokenized Graph Transformer was introduced in a Hugging Face 🤗 blog article on graph machine learning.
Jan 2023: Tokenized Graph Transformer was highlighted as one of the outstanding works on graph transformers in Michael Galkin’s review article on 2022’s graph machine learning.
Jan 2023: Visual Token Matching was accepted to ICLR 2023 as an oral presentation (notable-top-5%) after being ranked #1 in review ratings among the 4,966 submissions.
Jan 2023: I gave an invited presentation on Higher-order Transformers at Qualcomm Research Korea.
Aug 2022: I gave an invited presentation on Tokenized Graph Transformer at Learning on Graphs and Geometry Reading Group (LoGaG).

Research

* denotes equal contribution. Representative papers are highlighted.

Learning Symmetrization for Equivariance with Orbit Distance Minimization
Tien Dat Nguyen*, Jinwoo Kim*, Hongseok Yang, Seunghoon Hong
NeurIPS Workshop on Symmetry and Geometry in Neural Representations, 2023
paper / code / poster
Architecture-agnostic equivariance for diverse symmetries, such as the Lorentz group, based on invariant theory.
Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance
Jinwoo Kim, Tien Dat Nguyen, Ayhan Suleymanzade, Hyeokjun An, Seunghoon Hong
NeurIPS, 2023  (Spotlight Presentation)
paper / code / poster / slides (extended)
A method that endows arbitrary architectures with equivariance, e.g., repurposing a ViT to process graphs.
3D Denoisers are Good 2D Teachers: Molecular Pretraining via Denoising and Cross-Modal Distillation
Sungjun Cho, Dae-Woong Jeong, Sung Moon Ko, Jinwoo Kim, Sehui Han, Seunghoon Hong, Honglak Lee, Moontae Lee
arXiv, 2023
preprint
Universal Few-shot Learning of Dense Prediction Tasks with Visual Token Matching
Donggyun Kim, Jinwoo Kim, Seongwoong Cho, Chong Luo, Seunghoon Hong
ICLR, 2023  (Outstanding Paper Award)
paper / code
A few-shot learner for arbitrary dense vision tasks that combines a ViT with patch-level non-parametric matching.
Pure Transformers are Powerful Graph Learners
Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong
NeurIPS, 2022
paper / code / talk / poster / slides (extended)
A pure transformer without graph-specific modifications can be powerful for graph learning, in both theory and practice.
Transformers Meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost
Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong
NeurIPS, 2022
paper / code / poster
Equivariant Hypergraph Neural Networks
Jinwoo Kim, Saeyoon Oh, Sungjun Cho, Seunghoon Hong
ECCV, 2022
paper / code / poster / slides
Transformers Generalize DeepSets and Can be Extended to Graphs and Hypergraphs
Jinwoo Kim, Saeyoon Oh, Seunghoon Hong
NeurIPS, 2021
paper / code / poster / slides
A generalization of transformers to sets and (hyper)graphs based on higher-order permutation equivariance.
SetVAE: Learning Hierarchical Composition for Generative Modeling of Set-Structured Data
Jinwoo Kim*, Jaehoon Yoo*, Juho Lee, Seunghoon Hong
CVPR, 2021
paper / code / project page / poster / slides
Spontaneous Retinal Waves Can Generate Long-Range Horizontal Connectivity in Visual Cortex
Jinwoo Kim*, Min Song*, Jaeson Jang, Se-Bum Paik
The Journal of Neuroscience 40(34), 2020
paper / code

Experience

LG AI Research Fundamental Research Lab (FRL)
Research Intern, 2022 (Mentors: Moontae Lee and Honglak Lee)

Korea Advanced Institute of Science and Technology (KAIST)
Research Intern, 2020 (Mentors: Seunghoon Hong and Juho Lee)

Korea Advanced Institute of Science and Technology (KAIST)
Research Intern, 2018-2019 (Mentor: Se-Bum Paik)

Korea Advanced Institute of Science and Technology (KAIST)
Research Intern, 2017 (Mentor: Doyun Lee)

Honors

Outstanding Paper Award, ICLR 2023 (as a coauthor)
Silver Prize, Samsung Humantech Paper Award, 2023 (as a coauthor)
Recipient, Qualcomm Innovation Fellowship Korea, 2022
Recipient, KAIST Undergraduate Research Program Excellence Award, 2022 (as a mentor)
Recipient, Kwanjeong Education Foundation Scholarship, 2022-2023
Recipient, KAIST Engineering Innovator Award, 2020  (Top 5 in College of Engineering)
Recipient, National Science & Technology Scholarship, 2018-2020
Recipient, KAIST Alumni Fellowship, 2017-2020
Recipient, KAIST Presidential Fellowship, 2016-2020
Recipient, KAIST Dean's List, Spring 2016 / Fall 2016 / Spring 2018
Recipient, Hansung Scholarship for Gifted Students, 2015-2016

Invited Talks

Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance
Nov 2023: Pohang University of Science and Technology (POSTECH)

Universal Few-shot Learning of Dense Prediction Tasks with Visual Token Matching
Aug 2023: KAIST-Samsung Electronics DS Division Exchange Meetup

Pure Transformers are Powerful Graph Learners
Jan 2023: Microsoft USA
Nov 2022: NeurIPS 2022 at KAIST
Aug 2022: Learning on Graphs and Geometry Reading Group (LoGaG)

Transformers Generalize DeepSets and Can be Extended to Graphs and Hypergraphs
Jan 2023: Qualcomm Korea
Jan 2022: KAIST AI Workshop 21/22
Dec 2021: NeurIPS Social: ML in Korea

SetVAE: Learning Hierarchical Composition for Generative Modeling of Set-Structured Data
Sep 2021: Naver AI Author Meetup
Sep 2021: Korean Conference on Computer Vision (KCCV)

Spontaneous Retinal Waves Can Generate Long-Range Horizontal Connectivity in Visual Cortex
Oct 2019: Society for Neuroscience (SfN)

Academic Services

Conference Reviewer, NeurIPS 2022 / 2023, ICML 2023, CVPR 2022, LoG 2022 / 2023, ACCV 2022
Journal Reviewer, Neural Networks 2023

Teaching

Teaching Assistant, Computer Vision (CS576), Spring 2022 / 2023
Teaching Assistant, Introduction to Deep Learning (CS492I / CS371), Fall 2021 / 2022 / 2023
Teaching Assistant, Samsung Research AI Expert Program, Summer 2021 / 2022 / 2023
Teaching Assistant, Undergraduate Research Program (URP), Spring 2022  (Excellence Award)
Teaching Assistant, School of Computing Colloquium (CS966 / CS986), Spring 2021

Music

I love listening to and making music! My favorite musicians include Lamp, Radiohead, and Ryuichi Sakamoto. Below are some of my original compositions:


Last updated: Mar 2024


Template based on Jon Barron's academic website