Jaechan Lee jclee@rcv.sejong.ac.kr

I am an MS/PhD student in AI Robotics at the Robotics and Computer Vision Lab, Sejong University (Advisor: Prof. Yukyung Choi). My research focuses on language-guided robotic manipulation, particularly sub-task decomposition, manipulator action code and policy generation, and VA/VLA-based models, with complementary interests in open-vocabulary object detection, visual grounding, and affordance recognition. I aim to develop robotic systems that robustly connect high-level human intent with low-level actions through multimodal perception and reasoning.

๐Ÿซ Robotics and Computer Vision Lab, Sejong University (2024.01 ~)
๐ŸŽ“ MS/PhD: Department of Convergence Engineering for Artificial Intelligence, Major in AI Robotics, Sejong University (2025.03 ~)
๐ŸŽ“ BS: School of Intelligent Mechatronics Engineering, Sejong University (2019.03 ~ 2025.02)
Under Review · 2025
Jaechan Lee, Seunghyeon Lee, Taejoo Kim, and Yukyung Choi†
The 20th Korea Robotics Society Annual Conference (KRoC) · Feb. 2025
Jaechan Lee, Taejoo Kim, and Yukyung Choi†
Method and Apparatus for Reconstructing Intent Based on Multi-modal Inverse Reasoning
Korean Patent Application 10-2025-0210897 · Application filed · Dec. 2025
Development of robotic manipulation task learning based on a foundation model to understand and reason about task situations
Korea Planning & Evaluation Institute of Industrial Technology (KEIT) · Sep. 2024 - Feb. 2028 (expected)
Development of artificial intelligence software for unseen-object manipulation, integrating prompt- and situation-specific unseen-object recognition with arbitrary gripper shape analysis through gripper self-observation
Korea Planning & Evaluation Institute of Industrial Technology (KEIT) · Apr. 2024 - Dec. 2028 (expected)
Coming soon...