MAEDA Lab
INTELLIGENT & INDUSTRIAL ROBOTICS
Maeda Lab: 2021–2022
Div. of Systems Research, Faculty of Engineering / Specialization in Mechanical Engineering, Dept. of
Mechanical Engineering, Materials Science, and Ocean Engineering, Graduate School of Engineering Science /
Interfaculty Graduate School of Innovative and Practical Studies (Applied AI) /
Dept. of Mechanical Engineering, Materials Science, and Ocean Engineering, College of Engineering Science,
Yokohama National University
Mechanical Engineering and Materials Science Bldg. (N6-5),
79-5 Tokiwadai, Hodogaya-ku, Yokohama, 240-8501 JAPAN
Tel/Fax +81-45-339-3918 (Prof. Maeda)/+81-45-339-3894 (Lab)
E-mail maeda[at]ynu.ac.jp
https://iir.ynu.ac.jp/
People (2021–2022 Academic Year)
Dr. Yusuke MAEDA (Professor, Div. of Systems Research, Fac. of Engineering)
Doctoral Students (Specialization in Mechanical Engineering, Dept. of Mechanical
Engineering, Materials Science, and Ocean Engineering, Graduate School of Engineering
Science)
Yao DENG
Reiko TAKAHASHI (JSPS Research Fellow)
Master’s Students (Specialization in Mechanical Engineering, Dept. of Mechanical
Engineering, Materials Science, and Ocean Engineering, Graduate School of Engineering
Science/Interfaculty Graduate School of Innovative and Practical Studies)
Suneet SHUKLA
Hiroyuki IHARA
Yasuaki TANAKA
Takuya NAKATSUKA
Yukari HIRAKI
Qian LI
Akitoshi SAKATA
Akihide SUGA
Naruya SUZUKI
Kenta TAKAHASHI
Yuta NAKANISHI
Yoshiki TAHARA
Undergraduate Students (Mechanical Engineering Program, Dept. of Mechanical
Engineering and Materials Science/Dept. of Mechanical Engineering, Materials Science,
and Ocean Engineering, College of Engineering Science)
Dan KOBAYASHI
Haruki KAMIKUKITA
Hirotaka KONDO
Kenta SAKAKI
Mizuki SHONO
SLAM-Integrated Kinematic Calibration (SKCLAM)
SLAM (Simultaneous Localization and Mapping) techniques can be applied to industrial manipulators for 3D mapping of their surroundings and calibration of their kinematic parameters. We call this "SKCLAM" (Simultaneous Kinematic Calibration, Localization and Mapping). Using an RGB-D camera attached to the end-effector of a manipulator (Fig. 1), we demonstrated successful SKCLAM in a virtual environment (Fig. 2) and a real environment (Fig. 3) [1][2]. We are also studying SKCLAM with spherical cameras [3] and stereo cameras [4].
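The calibration part of SKCLAM can be illustrated with the following minimal Python sketch (a hypothetical simplification, not the implementation of [1][2]): Denavit-Hartenberg parameter offsets are refined by nonlinear least squares so that landmark points observed by the end-effector camera at several joint configurations align with their positions in the robot base frame. The variables samples, dh_nominal, T_flange_cam and landmarks_base are assumed inputs; full SKCLAM estimates the map simultaneously instead of assuming known landmarks.

import numpy as np
from scipy.optimize import least_squares

def dh_matrix(theta, d, a, alpha):
    # Homogeneous transform of one Denavit-Hartenberg link
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(q, dh_nominal, dh_offset):
    # Base-to-flange transform for joint angles q with perturbed DH parameters
    T = np.eye(4)
    for qi, (th0, d, a, al), (dth, dd, da, dal) in zip(q, dh_nominal, dh_offset):
        T = T @ dh_matrix(qi + th0 + dth, d + dd, a + da, al + dal)
    return T

def residuals(x, samples, dh_nominal, T_flange_cam, landmarks_base):
    # Mismatch between landmarks predicted from camera observations and their
    # known base-frame positions, stacked over all recorded configurations
    dh_offset = x.reshape(-1, 4)
    errs = []
    for q, pts_cam in samples:  # pts_cam: landmark positions seen by the camera
        T_base_cam = forward_kinematics(q, dh_nominal, dh_offset) @ T_flange_cam
        pts_base = (T_base_cam[:3, :3] @ pts_cam.T).T + T_base_cam[:3, 3]
        errs.append((pts_base - landmarks_base).ravel())
    return np.concatenate(errs)

# samples (list of (q, pts_cam)), dh_nominal, T_flange_cam and landmarks_base
# would come from the robot controller and the RGB-D camera (placeholders here):
# x0 = np.zeros(4 * n_joints)
# sol = least_squares(residuals, x0,
#                     args=(samples, dh_nominal, T_flange_cam, landmarks_base))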
References
[1] J. Li, A. Ito, H. Yaguchi and Y. Maeda: Simultaneous kinematic calibration, localization, and mapping
(SKCLAM) for industrial robot manipulators, Advanced Robotics, Vol. 33, No. 23, pp. 1225–1234, 2019.
[2] A. Ito, J. Li and Y. Maeda: SLAM-Integrated Kinematic Calibration Using Checkerboard Patterns, Proc.
of 2020 IEEE/SICE Int. Symp. on System Integration (SII 2020), pp. 551–556, 2020.
[3] Y. Tanaka, J. Li, A. Ito and Y. Maeda: SLAM-Integrated Kinematic Calibration with Spherical Cameras
for Industrial Manipulators, Proc. of JSME Conf. on Robotics and Mechatronics 2020 (ROBOMECH
2020), 2P2-B05, 2020 (in Japanese).
[4] Y. Nagatomo, J. Li, Y. Tanaka and Y. Maeda: SLAM-integrated Kinematic Calibration with a Stereo
Camera for Industrial Robots, Proc. of JSME Conf. of Manufacturing Systems Division 2021, pp. 77–
78, 2021 (in Japanese).
Fig. 1 Manipulator Equipped with an RGB-D Camera
Fig. 2 SKCLAM in Virtual Environment
Fig. 3 Example of an Obtained 3D Map
Robot Teaching
Teaching is indispensable for current industrial robots to execute tasks. Human operators have to teach robots their motions in detail, for example by conventional teaching/playback. However, robot teaching is complicated and time-consuming for novice operators, and the cost of training such operators is often unaffordable for small companies. We are therefore studying easy robot programming methods to promote wider use of robots.
Robot programming with manual volume sweeping
We developed a robot programming method for part handling [1][2]. In this method, a human operator makes a robot manipulator sweep a volume with its body (Fig. 4). The swept volume represents (a part of) the manipulator's free space, because the manipulator has passed through it without collision. The obtained swept volume is then used by a motion planner to generate a well-optimized path for the manipulator automatically. The swept volume can be displayed with Augmented Reality (AR) so that human operators can easily understand it, which leads to efficient robot programming [3] (Fig. 5).
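The idea of treating the manually swept volume as known free space can be sketched in Python as follows (a hypothetical simplification with an assumed 2 cm voxel size and illustrative helper names, not the published implementation): voxels occupied by the robot body during hand guiding are accumulated, and a candidate path is accepted only if every configuration along it stays inside that voxel set.

import numpy as np

VOXEL = 0.02  # voxel edge length [m] (assumed)

def to_voxels(points):
    # Convert 3D points sampled on the robot surface to integer voxel indices
    return {tuple(v) for v in np.floor(points / VOXEL).astype(int)}

class SweptVolume:
    def __init__(self):
        self.free = set()  # voxels known to be collision-free

    def add_configuration(self, body_points):
        # Accumulate the voxels occupied by the robot body at one recorded pose
        self.free |= to_voxels(body_points)

    def path_is_inside(self, path_body_points):
        # A candidate path is safe if every configuration stays in the swept volume
        return all(to_voxels(p) <= self.free for p in path_body_points)

# During manual sweeping, body_points would come from the robot's geometric model;
# during planning, only paths with path_is_inside(...) == True are kept.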
References
[1] Y. Maeda, T. Ushioda and S. Makita: Easy Robot Programming for Industrial Manipulators by Manual
Volume Sweeping, Proc. of 2008 IEEE Int. Conf. on Robotics and Automation (ICRA 2008), pp. 2234–
2239, 2008.
[2] S. Ishii and Y. Maeda: Programming of Robots Based on Online Computation of Their Swept Vol-
umes, Proc. of 23rd IEEE Int. Symp. on Robot and Human Interactive Communication (RO-MAN 2014),
pp. 385–390, 2014.
[3] Y. Sarai and Y. Maeda: Robot Programming for Manipulators through Volume Sweeping and Augmented
Reality, Proc. of 13th IEEE Conf. on Automation Science and Engineering (CASE 2017), pp. 302–307,
2017.
Fig. 4 Robot Programming by Manual Volume Sweeping: (a) Programming Overview (Manual Volume Sweeping, Swept Volume as a Part of Free Space, Motion Planning within Swept Volume), (b) Manual Volume Sweeping
Fig. 5 AR Display of Swept Volume and Planned Path
View-Based Teaching/Playback
We developed a teaching/playback method for industrial manipulators based on camera images [1][2]. In this method, robot motions and scene images during human demonstrations are recorded to obtain an image-to-motion mapping, which is then used for playback (Fig. 6). The method is more robust to changes in task conditions than conventional joint-variable-based teaching/playback. Because it adopts end-to-end learning through view-based image processing, neither object models nor camera calibration is necessary. We are improving our view-based teaching/playback with range images (Fig. 7) and occlusion-aware techniques for greater robustness [3]. For application to force-control tasks, visualization of force information based on photoelasticity (Fig. 8) is under investigation [4]. We are also integrating reinforcement learning with view-based teaching/playback to reduce the human effort required for teaching [5].
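A greatly simplified sketch of an image-to-motion mapping is given below (a hypothetical nearest-neighbor illustration, not the lab's actual learning pipeline; class and parameter names such as ViewBasedMapping and size are assumptions): demonstrated views and the motions executed with them are stored, and at playback the motion associated with the most similar stored view is reproduced.

import numpy as np

class ViewBasedMapping:
    def __init__(self, size=(32, 24)):
        self.size = size  # images are downsampled to keep matching cheap
        self.views, self.motions = [], []

    def _feature(self, image):
        # Downsample and normalize a grayscale image into a feature vector
        h, w = self.size[1], self.size[0]
        ys = np.linspace(0, image.shape[0] - 1, h).astype(int)
        xs = np.linspace(0, image.shape[1] - 1, w).astype(int)
        f = image[np.ix_(ys, xs)].astype(float).ravel()
        return (f - f.mean()) / (f.std() + 1e-9)

    def teach(self, image, motion):
        # Record one demonstrated (view, motion) pair
        self.views.append(self._feature(image))
        self.motions.append(np.asarray(motion))

    def playback(self, image):
        # Return the motion associated with the most similar taught view
        f = self._feature(image)
        dists = [np.linalg.norm(f - v) for v in self.views]
        return self.motions[int(np.argmin(dists))]

In practice, interpolation between neighboring views or a learned regressor would replace the single nearest-neighbor lookup, but the teach/playback structure is the same.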
References
[1] Y. Maeda and T. Nakamura: View-based teaching/playback for robotic manipulation, ROBOMECH J.,
Vol. 2, 2, 2015.
[2] Y. Maeda and Y. Moriyama: View-Based Teaching/Playback for Industrial Manipulators, Proc. of 2011
IEEE Int. Conf. on Robotics and Automation (ICRA 2011), pp. 4306–4311, 2011.
[3] Y. Maeda and Y. Saito: Lighting- and Occlusion-robust View-based Teaching/Playback for Model-free
Robot Programming, W. Chen et al. eds., Intelligent Autonomous Systems 14, pp. 939–952, Springer,
2017.
[4] Y. Nakagawa, Y. Maeda and S. Ishii: View-Based Teaching/Playback with Photoelasticity for Force-
Control Tasks, W. Chen et al. eds., Intelligent Autonomous Systems 14, pp. 825–837, Springer, 2017.
[5] Y. Maeda and R. Aburata: Teaching and Reinforcement Learning of Robotic View-Based Manipulation,
Proc. of 22nd IEEE Int. Symp. on Robot and Human Interactive Communication (RO-MAN 2013), pp.
87–92, 2013.
Fig. 6 Outline of View-Based Teaching/Playback: (a) human teaching, (b) image-to-motion mapping, (c) view-based playback
Fig. 7 Switching between Grayscale and Range Images for View-Based Teaching/Playback
Fig. 8 View-Based Teaching/Playback with Photoelasticity