Seongyong Koo (Koosy), Ph.D.
Address: Rheinische Friedrich-Wilhelms-Universität Bonn
Tel: +49 (0) 228 73-54160
Email: koosy _at_ ais.uni-bonn.de
Office: I.09
4.2015 – now
Senior scientist in Autonomous Intelligent Systems Group, Universität Bonn
5.2014 – 3.2015
Postdoctoral researcher in Dynamic Human-Robot Interaction Group, Technische Universität München
2.2009 – 2.2014
PhD in Telerobotics and Control Laboratory, KAIST, Daejeon, Korea
9.2011 – 2.2013
Visiting Researcher in Cluster of Excellence Cognition for Technical Systems, Technische Universität München
7.2009 – 8.2009
Visiting Researcher in Chair of Automatic Control Engineering, Technische Universität München
3.2007 – 2.2009
M.S. in Robotics Program, KAIST, Daejeon, Korea
3.2001 – 2.2007
B.S. in School of Mechanical Engineering, Hanyang University, Seoul, Korea
2015 at Uni-Bonn
MA-INF 4308 Lab Vision Systems: Learning Computer Vision on GPUs
MA-INF 4208 Seminar Vision Systems
2014 at TUM
WS2014/15 Dynamic Human-Robot Interaction: Visual tracking in human-robot interaction (Guest lecture)
2014 at KAIST
Human-Robot Interaction: Visual tracking in human-robot interaction (Guest lecture)
Google Scholar: number of citations, h-index, and i10-index available on the online profile.
Seongyong Koo, Grzegorz Ficht, Germán Martín García, Dmytro Pavlichenko, Martin Raak, and Sven Behnke:
Robolink Feeder: Reconfigurable Bin-Picking and Feeding with a Lightweight Cable-Driven Manipulator
Accepted for 13th IEEE International Conference on Automation Science and Engineering (CASE), Xi'an, China, to appear August 2017.
[Video]
Nikita Araslanov, Seongyong Koo, Juergen Gall, and Sven Behnke:
Efficient Single-View 3D Co-segmentation Using Shape Similarity and Spatial Part Relations
In: Rosenhahn B., Andres B. (eds) Pattern Recognition. GCPR 2016. Lecture Notes in Computer Science, vol 9796. Springer, Cham.
Seongyong Koo and Sven Behnke:
Focused On-line Visual-Motor Coordination for a Dual-arm Robot Manipulator
In proceedings of IEEE International Conference on Robotics and Automation (ICRA), pp. 1579-1586, Stockholm, Sweden, May 2016.
[Video] [Source code]
Shile Li, Seongyong Koo, and Dongheui Lee:
Real-time and Model-free Object Tracking using Particle Filter with Joint Color-Spatial Descriptor
In proceedings of 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015) at Hamburg, Germany, Sep. 2015.
Seongyong Koo, Dongheui Lee, and Dong-Soo Kwon:
Unsupervised object individuation from RGB-D image sequences
In proceedings of 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014) at Chicago, IL, USA, Nov. 2014, pp. 4450-4457.
Seongyong Koo, Dongheui Lee, and Dong-Soo Kwon:
Incremental object learning and robust tracking of multiple objects from RGB-D point set data
Journal of Visual Communication and Image Representation (JVCI), Vol. 25, No. 1, pp. 108-121, 2014.
Seongyong Koo, Dongheui Lee, and Dong-Soo Kwon:
Multiple Object Tracking Using an RGB-D Camera by Hierarchical Spatiotemporal Data Association
In proceedings of 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013) at Tokyo, Japan, Nov. 2013, pp. 1113-1118.
Seongyong Koo and Dong-Soo Kwon:
Multiple People Tracking from 2D Depth Data by Deterministic Spatiotemporal Data Association
In proceedings of 2013 IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man 2013) at Gyeongju, Korea, Aug. 2013, pp. 656-661.
Seongyong Koo, Dongheui Lee, and Dong-Soo Kwon:
GMM-based 3D Object Representation and Robust Tracking in Unconstructed Dynamic Environments
In proceedings of 2013 IEEE International Conference on Robotics and Automation (ICRA 2013) at Karlsruhe, Germany, May 2013, pp. 1106-1113.
Jaewoo Kim, Seong-Yong Koo, and Dong-Soo Kwon:
A hierarchical model of expressive robot motion for effective gesture editing
In proceedings of 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI 2011) at Incheon, Korea, Nov. 2011, pp. 23-26.
Seong-Yong Koo, Kiru Park, Dong-Soo Kwon:
A Dual-layer User Model based Cognitive System for User-Adaptive Service Robots
In proceedings of 20th IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man 2011) at Atlanta, GA, USA, July 2011, pp. 59-64.
Young-Min Kim, Seong-Yong Koo, Jong Gwan Lim and Dong-Soo Kwon:
A Robust Online Touch Recognition for Dynamic Human-robot Interaction
IEEE Transactions on Consumer Electronics, Vol. 56, No. 3, pp. 1979-1987, 2010.
Oh-Hun Kwon, Seong-Yong Koo, Young-Geun Kim, Dong-Soo Kwon:
Telepresence Robot System for English Tutoring
In proceedings of 6th IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO 2010) at Seoul, Korea, Oct. 2010, pp. 152-155.
Jong-Chan Park, Hyunsoo Song, Seongyong Koo, Young-Min Kim, Dong-Soo Kwon:
Robot’s Behavior Expressions according to the Sentence Types and Emotions with Modification by Personality
In proceedings of 6th IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO 2010) at Seoul, Korea, Oct. 2010, pp. 105-110.
Hyoung-Rock Kim, Seong-Yong Koo, and Dong-Soo Kwon:
Designing Reactive Emotion Generation Model for Interactive Robots
In proceedings of 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010) at Taipei, Taiwan, Oct. 2010, pp. 2270-2275. [JTCF Novel Technology Paper Award for Amusement Culture]
Oh-Hun Kwon, Seong-Yong Koo, Nam-Su Yuk, and Dong-Soo Kwon:
Two-Wheel Mobile Platform with Active Assistant Wheels for Expressive Service Robot
In proceedings of 6th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI 2009) at Gwangju, Korea, Oct. 2009, pp. 152-155.
Seung-Chan Kim, Seongyong Koo, and Dong-Soo Kwon:
Midas Touch - Chunking Information on a Robotic User Interface using Spatial and Functional Metaphor
In proceedings of 18th IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man 2009) at Toyama, Japan, Sep. 2009, pp. 967-972.
Seongyong Koo and Dong-Soo Kwon:
Recognizing Human Intentional Actions from the Relative Movements between Human and Robot
In proceedings of 18th IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man 2009) at Toyama, Japan, Sep. 2009, pp. 939-944.
Seong-yong Koo, Jong-gwan Lim, and Dong-soo Kwon:
Online Touch Behavior Recognition of Hard-cover Robot Using Temporal Decision Tree Classifier
In proceedings of 17th IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man 2008) at Munich, Germany, Aug. 2008, pp. 425-429.
Seung-Chan Kim, Seong-Yong Koo, Seungyon-Seny Lee, and Dong-Soo Kwon:
Rubby: Two Degree-of-Freedom Interface with Haptic and Auditory Feedback
In proceedings of 4th International Conference on Enactive Interfaces at Grenoble, France, Nov. 2007, pp. 357-360.
Seong-Yong Koo and Dong-Soo Kwon:
Gomy: The Baby Bear-Like Robot for Emotion Human-Robot Interaction
In proceedings of 2007 International Conference on Ubiquitous Robots and Ambient Intelligence (URAI 2007) at Pohang, Korea, Nov. 2007, pp. 603-608.
PhD dissertation:
Self-constructive multiple object detection and tracking in unstructured dynamic human environment
Feb. 2014, Mechanical Engineering, KAIST
Master's dissertation:
Human Intention Recognition from the Relative Movement between Human and Robot for the Restaurant Service Robot
Feb. 2009, Robotics Program, KAIST
4.2015 – now
ALROMA: Autonomous Active Object Learning Through Robot Manipulation, funded by DFG Priority Programme "Autonomous Learning", Germany.
EuRoC: European Robotics Challenges. Challenge 1: Reconfigurable Interactive Manufacturing Cell, funded by the European Union's Seventh Framework Programme.
[Video]
6.2014 – 3.2015
Learning and Control of Dynamic Manipulation Tasks from Human Demonstrations, funded by DFG Project LE 2731/2-1, Germany
2.2013 – 2.2014
Development of a self-improving bidirectional sustainable HRI technology, Industrial Strategic Technology Development Program, funded by the Ministry of Knowledge Economy of Korea
2.2010 – 2.2011
Autonomous Agent of Future Robot Computer, funded by Ministry of Knowledge Economy and Korea Research Council for Industrial Science & Technology
8.2009 – 2.2010
Telepresence English Tutoring Robot, in cooperation with Rastech Corporation, funded by the Ministry of Knowledge Economy of Korea
12.2008 – 8.2011
Robot Motion Editing Toolkit Software Development, 21st Century Frontier R&D Program funded by Ministry of Knowledge Economy of Korea
8.2007 – 2.2008
Touch Interface and Touch Pattern Recognition, 21st Century Frontier R&D Program funded by Ministry of Knowledge Economy of Korea
6.2007 – 5.2008
Restaurant Service Robot Development, funded by the Human Resource Training Project for Regional Innovation of Korea
Haptically generated apparatus and portable device having haptically generated apparatus
Korean patent (Patent No. 10-1201250, 2012)
Haptically generated apparatus, portable device and recording medium thereof
Korean patent (Patent No. 10-1214853, 2012)
Method for embedding emotion onto media utilizing emotion of human or robot
Korean patent (Patent No. 10-1097358, 2011)
System for Clear formatting and extracting file location using user input and method therefor
Korean patent (Patent No. 10-1097835, 2011)
Parallel two-wheel running gear and control method capable of controlling posture and position separately through variable-type assistant wheels
Korean patent (Patent No. 10-1013936, 2011)
The system for recognizing of user touch pattern using touch sensor and accelerometer sensor
Korean patent (Patent No. 10-0995130, 2010)
Handheld apparatus and system provided with shock-sensing module, and the method of executing command using the same
Korean patent (Patent No. 10-0936485, 2010)