MAEDA Lab
INTELLIGENT & INDUSTRIAL ROBOTICS
Maeda Lab: 2021–2022
Div. of Systems Research, Faculty of Engineering / Specialization in Mechanical Engineering, Dept. of
Mechanical Engineering, Materials Science, and Ocean Engineering, Graduate School of Engineering Science /
Interfaculty Graduate School of Innovative and Practical Studies (Applied AI) /
Dept. of Mechanical Engineering, Materials Science, and Ocean Engineering, College of Engineering Science,
Yokohama National University
Mechanical Engineering and Materials Science Bldg. (N6-5),
79-5 Tokiwadai, Hodogaya-ku, Yokohama, 240-8501 JAPAN
Tel/Fax +81-45-339-3918 (Prof. Maeda)/+81-45-339-3894 (Lab)
E-mail maeda[at]ynu.ac.jp
https://iir.ynu.ac.jp/
People (2021–2022 Academic Year)
Dr. Yusuke MAEDA (Professor, Div. of Systems Research, Fac. of Engineering)
Doctoral Students (Specialization in Mechanical Engineering, Dept. of Mechanical
Engineering, Materials Science, and Ocean Engineering, Graduate School of Engineering
Science)
Yao DENG
Reiko TAKAHASHI (JSPS Research Fellow)
Master’s Students (Specialization in Mechanical Engineering, Dept. of Mechanical
Engineering, Materials Science, and Ocean Engineering, Graduate School of Engineering
Science/Interfaculty Graduate School of Innovative and Practical Studies)
Suneet SHUKLA
Hiroyuki IHARA
Yasuaki TANAKA
Takuya NAKATSUKA
Yukari HIRAKI
Qian LI
Akitoshi SAKATA
Akihide SUGA
Naruya SUZUKI
Kenta TAKAHASHI
Yuta NAKANISHI
Yoshiki TAHARA
Undergraduate Students (Mechanical Engineering Program, Dept. of Mechanical
Engineering and Materials Science/Dept. of Mechanical Engineering, Materials Science,
and Ocean Engineering, College of Engineering Science)
Dan KOBAYASHI
Haruki KAMIKUKITA
Hirotaka KONDO
Kenta SAKAKI
Mizuki SHONO
SLAM-Integrated Kinematic Calibration (SKCLAM)
SLAM (Simultaneous Localization and Mapping) techniques can be applied to industrial manipulators for 3D mapping around them and calibration of their kinematic parameters. We call this “SKCLAM” (Simultaneous Kinematic Calibration, Localization and Mapping). Using an RGB-D camera attached to the end-effector of a manipulator (Fig. 1), we demonstrated successful SKCLAM in a virtual environment (Fig. 2) and a real environment (Fig. 3) [1][2]. We are also studying SKCLAM with spherical cameras [3] and stereo cameras [4].
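The core idea can be illustrated with a small numerical sketch: the kinematic parameters of the manipulator are estimated jointly with the map, by minimizing the disagreement between what the hand-mounted camera observes and what the current parameter estimates predict. The following is only a toy illustration of that joint estimation (a planar 2-DoF arm, 2D landmarks, and noise-free measurements are assumed); it is not the published SKCLAM algorithm, which works with full manipulator kinematics and RGB-D SLAM.

```python
import numpy as np
from scipy.optimize import least_squares

TRUE_LINKS = np.array([0.52, 0.31])       # actual link lengths (unknown to the robot)
NOMINAL_LINKS = np.array([0.50, 0.30])    # nominal link lengths from the CAD model
LANDMARKS = np.array([[1.0, 0.4], [0.7, -0.3], [1.2, 0.1]])  # landmark positions (world frame)

def fk(links, q):
    """End-effector pose (x, y, heading) of a planar 2-link arm."""
    x = links[0] * np.cos(q[0]) + links[1] * np.cos(q[0] + q[1])
    y = links[0] * np.sin(q[0]) + links[1] * np.sin(q[0] + q[1])
    return x, y, q[0] + q[1]

def observe(links, q, landmarks):
    """Landmark positions expressed in the hand-mounted camera frame."""
    x, y, th = fk(links, q)
    R = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])  # world -> camera
    return (landmarks - np.array([x, y])) @ R.T

# Joint configurations visited while "mapping", and the corresponding measurements.
QS = [np.array([0.2, 0.5]), np.array([0.8, -0.4]), np.array([1.1, 0.9]), np.array([-0.3, 1.2])]
MEAS = [observe(TRUE_LINKS, q, LANDMARKS) for q in QS]

def residuals(x):
    """Disagreement between measurements and predictions for candidate parameters."""
    links, lms = x[:2], x[2:].reshape(-1, 2)
    return np.concatenate([(observe(links, q, lms) - m).ravel() for q, m in zip(QS, MEAS)])

# Unknowns: 2 link lengths + all landmark coordinates, started from perturbed guesses.
x0 = np.concatenate([NOMINAL_LINKS, LANDMARKS.ravel() + 0.05])
sol = least_squares(residuals, x0)
print("estimated link lengths:", sol.x[:2])   # should approach TRUE_LINKS = [0.52, 0.31]
```

In the same spirit, the real system refines the manipulator's kinematic parameters while building a 3D map of its surroundings from the camera data.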
References
[1] J. Li, A. Ito, H. Yaguchi and Y. Maeda: Simultaneous kinematic calibration, localization, and mapping
(SKCLAM) for industrial robot manipulators, Advanced Robotics, Vol. 33, No. 23, pp. 1225–1234, 2019.
[2] A. Ito, J. Li and Y. Maeda: SLAM-Integrated Kinematic Calibration Using Checkerboard Patterns, Proc.
of 2020 IEEE/SICE Int. Symp. on System Integration (SII 2020), pp. 551–556, 2020.
[3] Y. Tanaka, J. Li, A. Ito and Y. Maeda: SLAM-Integrated Kinematic Calibration with Spherical Cameras
for Industrial Manipulators, Proc. of JSME Conf. on Robotics and Mechatronics 2020 (ROBOMECH
2020), 2P2-B05, 2020 (in Japanese).
[4] Y. Nagatomo, J. Li, Y. Tanaka and Y. Maeda: SLAM-integrated Kinematic Calibration with a Stereo
Camera for Industrial Robots, Proc. of JSME Conf. of Manufacturing Systems Division 2021, pp. 77–
78, 2021 (in Japanese).
Fig. 1 Manipulator Equipped with an RGB-D Camera
Fig. 2 SKCLAM in Virtual Environment
Fig. 3 Example of an Obtained 3D Map
Robot Teaching
Teaching is indispensable for current industrial robots to execute tasks. Human operators have to teach motions in detail to robots by, for example, conventional teaching/playback. However, robot teaching is complicated and time-consuming for novice operators, and the cost of training them is often unaffordable for small companies. We are therefore studying easy robot programming methods to promote wider use of robots.
Robot programming with manual volume sweeping We developed a robot programming method for part handling [1][2]. In this method, a human operator makes a robot manipulator sweep a volume with its links (Fig. 4). The swept volume represents (a part of) the manipulator’s free space, because the manipulator has passed through it without collisions. The obtained swept volume is then used by a motion planner to generate a well-optimized path of the manipulator automatically. The swept volume can be displayed with Augmented Reality (AR) so that human operators can easily understand it, which leads to efficient robot programming [3] (Fig. 5).
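A minimal sketch of the underlying principle is given below: the swept volume is accumulated as a set of voxels during the manual sweeping phase, and a candidate motion is accepted only if the robot never leaves that set. The 1-link planar arm, the sampling density, and the voxel size are assumptions chosen for brevity; the actual system sweeps the full 3D geometry of a multi-link manipulator.

```python
import numpy as np

VOXEL = 0.02  # voxel edge length [m]

def voxelize(points):
    """Map 2D points to a set of integer voxel indices."""
    return {tuple(v) for v in np.floor(np.asarray(points) / VOXEL).astype(int)}

def link_points(q, length=0.5, n=30):
    """Sample points along a 1-link planar arm at joint angle q [rad]."""
    s = np.linspace(0.0, length, n)
    return np.stack([s * np.cos(q), s * np.sin(q)], axis=1)

# (1) Manual sweeping: the operator moves the arm through collision-free angles,
#     and every voxel the arm passes through is recorded as known free space.
swept = set()
for q in np.linspace(-0.5, 1.0, 200):
    swept |= voxelize(link_points(q))

# (2) Planning: a candidate joint-space path is acceptable only if the arm
#     never occupies a voxel outside the recorded swept volume.
def path_inside_swept_volume(path):
    return all(voxelize(link_points(q)) <= swept for q in path)

print(path_inside_swept_volume(np.linspace(-0.4, 0.9, 50)))   # True: stays inside the sweep
print(path_inside_swept_volume(np.linspace(-0.4, 1.4, 50)))   # False: leaves the swept volume
```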
References
[1] Y. Maeda, T. Ushioda and S. Makita: Easy Robot Programming for Industrial Manipulators by Manual
Volume Sweeping, Proc. of 2008 IEEE Int. Conf. on Robotics and Automation (ICRA 2008), pp. 2234–
2239, 2008.
[2] S. Ishii and Y. Maeda: Programming of Robots Based on Online Computation of Their Swept Vol-
umes, Proc. of 23rd IEEE Int. Symp. on Robot and Human Interactive Communication (RO-MAN 2014),
pp. 385–390, 2014.
[3] Y. Sarai and Y. Maeda: Robot Programming for Manipulators through Volume Sweeping and Augmented
Reality, Proc. of 13th IEEE Conf. on Automation Science and Engineering (CASE 2017), pp. 302–307,
2017.
Fig. 4 Robot Programming by Manual Volume Sweeping: (a) programming overview (manual volume sweeping, swept volume as a part of free space, motion planning within swept volume); (b) manual volume sweeping
Fig. 5 AR Display of Swept Volume and Planned Path
View-Based Teaching/Playback
We developed a teaching/playback method based on camera images for industrial manipulators [1][2]. In this method, robot motions and scene images in human demonstrations are recorded to obtain an image-to-motion mapping, and the mapping is used for playback (Fig. 6). It achieves greater robustness against changes in task conditions than conventional joint-variable-based teaching/playback. Our method adopts end-to-end learning through view-based image processing, and therefore neither object models nor camera calibration is necessary. We are improving our view-based teaching/playback by using range images (Fig. 7) and occlusion-aware techniques for greater robustness [3]. For application to force-control tasks, visualization of force information based on photoelasticity (Fig. 8) is under investigation [4]. We are also trying to integrate reinforcement learning with view-based teaching/playback to reduce the human operations required for teaching [5].
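The data flow of an image-to-motion mapping can be sketched in a few lines. The nearest-neighbour lookup below is only a stand-in chosen for brevity (the published method learns the mapping with view-based image processing rather than a raw-pixel lookup); it simply illustrates how demonstration pairs are recorded during teaching and then reused at playback.

```python
import numpy as np

class ViewBasedMapping:
    """Record (image, motion) pairs during teaching; reuse them at playback."""

    def __init__(self):
        self.images, self.motions = [], []

    def teach(self, image, motion):
        """Store one scene image and the robot motion demonstrated for it."""
        self.images.append(np.asarray(image, dtype=float).ravel())
        self.motions.append(np.asarray(motion, dtype=float))

    def playback(self, image):
        """Return the motion associated with the most similar recorded view."""
        x = np.asarray(image, dtype=float).ravel()
        dists = [np.linalg.norm(x - img) for img in self.images]
        return self.motions[int(np.argmin(dists))]

# Toy usage: 4x4 grayscale "images" paired with 6-axis motion commands.
rng = np.random.default_rng(0)
mapping = ViewBasedMapping()
for _ in range(20):
    mapping.teach(image=rng.random((4, 4)), motion=rng.uniform(-1, 1, size=6))

query = rng.random((4, 4))
print(mapping.playback(query))   # motion command for the closest stored view
```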
References
[1] Y. Maeda and T. Nakamura: View-based teaching/playback for robotic manipulation, ROBOMECH J.,
Vol. 2, 2, 2015.
[2] Y. Maeda and Y. Moriyama: View-Based Teaching/Playback for Industrial Manipulators, Proc. of 2011
IEEE Int. Conf. on Robotics and Automation (ICRA 2011), pp. 4306–4311, 2011.
[3] Y. Maeda and Y. Saito: Lighting- and Occlusion-robust View-based Teaching/Playback for Model-free
Robot Programming, W. Chen et al. eds., Intelligent Autonomous Systems 14, pp. 939–952, Springer,
2017.
[4] Y. Nakagawa, Y. Maeda and S. Ishii: View-Based Teaching/Playback with Photoelasticity for Force-
Control Tasks, W. Chen et al. eds., Intelligent Autonomous Systems 14, pp. 825–837, Springer, 2017.
[5] Y. Maeda and R. Aburata: Teaching and Reinforcement Learning of Robotic View-Based Manipulation,
Proc. of 22nd IEEE Int. Symp. on Robot and Human Interactive Communication (RO-MAN 2013), pp.
87–92, 2013.
Fig. 6 Outline of View-Based Teaching/Playback: (a) human teaching; (b) image-to-motion mapping; (c) view-based playback
Fig. 7 Switching between Grayscale and Range Images for View-Based Teaching/Playback
Fig. 8 View-based Teaching/Playback with Photoelasticity
Caging and Caging-based Grasping
Caging is a method to constrain objects geometrically so that they cannot escape from a “cage” formed by robot bodies.
3D multifingered caging While most related studies deal with planar caging, we study three-dimensional caging by multifingered robot hands (Fig. 9). Caging does not require force control, and therefore it is well-suited to current robotic devices and broadens the options for robotic manipulation. We are investigating sufficient conditions for 3D multifingered caging and developing an algorithm to plan hand motions for caging based on these conditions [1]. Robot motions generated by the developed planning algorithm were validated on an arm-hand system [2] (Fig. 10).
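In its simplest planar form, a caging test asks whether the object's free space around its current position is bounded. The toy check below illustrates that idea with disc-shaped fingertips and a disc-shaped object on a translation-only grid; all radii and positions are invented example values, and the lab's actual work addresses the much harder 3D multifingered case with full object pose.

```python
from collections import deque
import numpy as np

FINGERS = [(0.0, 1.0), (0.87, -0.5), (-0.87, -0.5)]   # fingertip centres (assumed)
R_FINGER, R_OBJECT = 0.55, 0.40                        # fingertip / object radii (assumed)
STEP, BOUND = 0.05, 2.0                                # grid resolution / workspace half-width

def collides(p):
    """True if the object centred at p overlaps any fingertip."""
    return any(np.hypot(p[0] - fx, p[1] - fy) < R_FINGER + R_OBJECT for fx, fy in FINGERS)

def is_caged(start):
    """Caged iff no collision-free path leads from `start` out of the workspace (grid BFS)."""
    n = int(BOUND / STEP)
    start_cell = (round(start[0] / STEP), round(start[1] / STEP))
    seen, queue = {start_cell}, deque([start_cell])
    while queue:
        i, j = queue.popleft()
        if abs(i) >= n or abs(j) >= n:
            return False                               # reached the workspace boundary: escaped
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cell = (i + di, j + dj)
            if cell not in seen and not collides((cell[0] * STEP, cell[1] * STEP)):
                seen.add(cell)
                queue.append(cell)
    return True                                        # free region around `start` is bounded

print(is_caged((0.0, 0.0)))    # True: three fingertips close the escape routes
FINGERS[0] = (0.0, 3.0)        # move one fingertip far away
print(is_caged((0.0, 0.0)))    # False: the object can now slip out
```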
Caging-based Grasping Position-controlled robot hands can capture an object and manipulate it via caging without force sensing or force control. However, an object under caging remains movable within the closed region, which is not acceptable in some applications; in such cases, grasping is required. We proposed a new, simple approach to grasping by position-controlled robot hands: caging-based grasping by robot fingers with rigid parts and outer soft parts. In caging-based grasping, we cage an object with the rigid parts of a robot hand and then construct a complete grasp with the soft parts of the hand. We are studying the formal definition of caging-based grasping and concrete conditions for it in planar and spatial cases. Based on the derived conditions, we demonstrated planar caging-based grasping by mobile robots and spatial caging-based grasping by a multifingered hand (Fig. 11) [3][4]. We have also extended the theory of caging-based grasping to deal with deformable objects (Fig. 12) [5].
References
[1] S. Makita and Y. Maeda: 3D Multifingered Caging: Basic Formulation and Planning, Proc. of 2008
IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2008), pp. 2697–2702, 2008.
[2] S. Makita, K. Okita and Y. Maeda: 3D Two-Fingered Caging for Two Types of Objects: Sufficient
Conditions and Planning, Int. J. of Mechatronics and Automation, Vol. 3, No. 4, pp. 263–277, 2013.
[3] Y. Maeda, N. Kodera and T. Egawa: Caging-Based Grasping by a Robot Hand with Rigid and Soft Parts,
Proc. of 2012 IEEE Int. Conf. on Robotics and Automation (ICRA 2012), pp. 5150–5155, 2012.
[4] T. Egawa, Y. Maeda and H. Tsuruga: Two- and Three-dimensional Caging-Based Grasping of Objects
of Various Shapes with Circular Robots and Multi-Fingered Hands, Proc. of 41st Ann. Conf. of IEEE
Industrial Electronics Soc. (IECON 2015), pp. 643–648, 2015.
[5] D. Kim, Y. Maeda and S. Komiyama: Caging-based Grasping of Deformable Objects for Geometry-based Robotic Manipulation, ROBOMECH J., Vol. 6, 3, 2019.
Fig. 9 3D Multifingered Caging
Fig. 10 Caging of a Sphere
Fig. 11 Caging-based Grasping by a Multifingered Hand
Fig. 12 Caging-based Grasping of a Deformable Object
Caging Manipulation
Caging is a method to make an object inescapable from a closed region geometrically. We study robotic manipulation with caging, or “caging manipulation.”
In-Hand Caging Manipulation The pose of an object caged in a robot hand can be controlled to some extent by changing the hand configuration. We call this “in-hand caging manipulation.” It enables position-controlled robot hands to perform robust in-hand manipulation. In-hand caging manipulation was successfully tested on actual robot hands (Fig. 13), and a planning algorithm for it was developed [1][2]. We are also studying various forms of in-hand caging manipulation [3].
Cooperative Caging Manipulation An object is not fully constrained under caging. This property enables cooperative manipulation based on position control without excessive internal forces. We study dual-arm cooperative manipulation of long objects with caging or caging-based grasping (Fig. 14) [4]. It does not require force control and can handle a variety of objects with appropriate end-effectors.
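The reason position control suffices can be stated as a simple clearance argument, sketched below: as long as the relative positioning error between the two arms stays within the clearance of the caging end-effectors, the object is never squeezed and no internal force can build up. The ring-and-pipe geometry and all numbers are illustrative assumptions, not the experimental setup.

```python
# All dimensions below are invented example values.
pipe_radius = 0.015    # [m] radius of the manipulated pipe
ring_radius = 0.025    # [m] inner radius of each ring-shaped end-effector
clearance = ring_radius - pipe_radius

def tolerates(relative_error):
    """True if a lateral positioning error between the two arms leaves the pipe unsqueezed."""
    return relative_error < clearance

for err in (0.003, 0.008, 0.012):
    print(f"relative error {err * 1000:.0f} mm -> no internal force: {tolerates(err)}")
```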
References
[1] Y. Maeda and T. Asamura: Sensorless In-hand Caging Manipulation, W. Chen et al. eds., Intelligent
Autonomous Systems 14, pp. 255–267, Springer, 2017.
[2] S. Komiyama and Y. Maeda: Position and Orientation Control of Polygonal Objects by Sensorless In-
hand Caging Manipulation, Proc. of IEEE Int. Conf. on Robotics and Automation (ICRA 2021), 2021
(to appear).
[3] Y. Maeda, T. Asamura, T. Egawa and Y. Kurata: Geometry-Based Manipulation through Robotic Caging,
IEEE/RSJ IROS 2014 Workshop on Robot Manipulation: What has been achieved and what remains to
be done?, 2014.
[4] Y. Hiraki and Y. Maeda: Caging-based Dual-arm Cooperation without Force Control, Proc. of JSME
Conf. on Robotics and Mechatronics 2020 (ROBOMECH 2020), 2A1-M05, 2020 (in Japanese).
Fig. 13 In-Hand Caging Manipulation
Fig. 14 Dual-arm Cooperative Manipulation with Caging: (a) wire harness; (b) long pipe
Mechanical Analysis of Robotic Manipulation
Manipulation is one of the most fundamental topics of robotics. We study robotic manipulation toward achieving human-like dexterity in robots.
In particular, we study the contact mechanics of graspless manipulation (manipulation without grasping, Fig. 15) and power grasps (grasping with not only the fingertips but also other finger surfaces and the palm, Fig. 16) [1][2]. Based on this contact mechanics, we analyzed the robustness [3] (Fig. 17) and the internal forces [4] of robotic manipulation. We also proposed a method for joint torque optimization for robotic manipulation [5].
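A flavour of this kind of analysis can be given with a small feasibility computation: whether frictional contact forces can balance gravity on an object can be checked by writing force and moment equilibrium with linearised friction cones as a linear program. The planar block, contact layout, and friction coefficient below are made-up example values, and the sketch is not the indeterminate-force formulation of [1][2]; it only shows the style of computation involved.

```python
import numpy as np
from scipy.optimize import linprog

MU, WEIGHT = 0.5, 9.8                     # friction coefficient, object weight [N] (assumed)
CONTACTS = [(-0.1, 0.0), (0.1, 0.0)]      # contact points on the object's bottom face
COM = (0.0, 0.05)                         # centre of mass of the object

# Each planar friction cone is spanned by two unit edge directions (normal tilted by +/- atan(MU)).
edges = []
for _ in CONTACTS:
    n, t = np.array([0.0, 1.0]), np.array([1.0, 0.0])     # contact normal and tangent
    edges.append([(n + MU * t) / np.hypot(1.0, MU), (n - MU * t) / np.hypot(1.0, MU)])

# Equality constraints: contact forces balance gravity (sum = (0, WEIGHT)) and produce
# zero net moment about the centre of mass.
rows = [[], [], []]
for (px, py), contact_edges in zip(CONTACTS, edges):
    rx, ry = px - COM[0], py - COM[1]
    for e in contact_edges:
        rows[0].append(e[0])                    # force balance, x
        rows[1].append(e[1])                    # force balance, y
        rows[2].append(rx * e[1] - ry * e[0])   # moment balance about the COM
b_eq = [0.0, WEIGHT, 0.0]

n_vars = 2 * len(CONTACTS)                      # one nonnegative coefficient per cone edge
res = linprog(c=np.zeros(n_vars), A_eq=rows, b_eq=b_eq, bounds=[(0.0, None)] * n_vars)
print("equilibrium feasible:", res.success)     # True for this resting block
```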
References
[1] Y. Maeda, K. Oda and S. Makita: Analysis of Indeterminate Contact Forces in Robotic Grasping and
Contact Tasks, Proc. of 2007 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2007),
pp. 1570–1575, 2007.
[2] Y. Maeda, Y. Goto and S. Makita: A New Formulation for Indeterminate Contact Forces in Rigid-body
Statics, Proc. of 2009 IEEE Int. Symp. on Assembly and Manufacturing (ISAM 2009), pp. 298–303,
2009.
[3] Y. Maeda and S. Makita: A Quantitative Test for the Robustness of Graspless Manipulation, Proc. of
2006 IEEE Int. Conf. on Robotics and Automation (ICRA 2006), pp. 1743–1748, 2006.
[4] Y. Maeda: On the Possibility of Excessive Internal Forces on Manipulated Objects in Robotic Contact
Tasks, Proc. of 2005 IEEE Int. Conf. on Robotics and Automation (ICRA 2005), pp. 1953–1958, 2005.
[5] S. Makita and Y. Maeda: Joint Torque Optimization for Quasi-Static Graspless Manipulation, Proc. of
2013 IEEE Int. Conf. on Robotics and Automation (ICRA 2013), pp. 3715–3720, 2013.
Fig. 15 Graspless Manipulation (sliding, tumbling, pivoting, pushing)
Fig. 16 Mechanical Analysis of Power Grasp
Fig. 17 Robustness of Pushing
Handling of Various Objects by Robots
Techniques for robotic manipulation of a variety of objects are under investigation.
Vision-Based Object Picking Robotic bin-picking is more flexible and versatile than the use of conventional part feeders, and it is therefore effective for low-volume production. Many bin-picking techniques have been proposed, and some of them are in actual use. However, it is difficult to apply these existing techniques to coil springs because of their shape characteristics. We therefore developed a dedicated method to recognize and localize coil springs in a pile, which enabled robotic bin-picking of coil springs (Fig. 18) [1]. Additionally, we are developing an impacting-based method to detect unknown objects for picking (Fig. 19) [2].
3D Block Printing We developed a robotic 3D printer: a robot system that can assemble toy brick sculptures from their 3D CAD models [3][4][5]. In this system, a 3D CAD model is automatically converted into a block model consisting of primitive toy blocks. An assembly plan for the block model is then generated automatically, if one is feasible. According to the plan, an industrial robot assembles the brick sculpture layer by layer from bottom to top. We demonstrated successful assembly of several brick sculptures (Fig. 20).
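The two planning steps of this pipeline (block-model conversion and bottom-to-top sequencing) can be sketched as follows. The dome-shaped target, the unit-size blocks, and the "must rest on the floor or on an already placed block" rule are simplifying assumptions for illustration; the actual system converts CAD models into multi-stud toy bricks and checks richer feasibility conditions.

```python
def voxelize_dome(radius=3):
    """Unit-block model of a dome (half sphere, flat side down): set of (x, y, z) cells."""
    r = radius
    return {(x, y, z)
            for x in range(-r, r + 1)
            for y in range(-r, r + 1)
            for z in range(0, r + 1)
            if x * x + y * y + z * z <= r * r}

def assembly_plan(blocks):
    """Bottom-to-top placement order; every block must be supported when it is placed."""
    plan, placed = [], set()
    for (x, y, z) in sorted(blocks, key=lambda c: (c[2], c[0], c[1])):
        if z > 0 and (x, y, z - 1) not in placed:
            raise ValueError(f"block {(x, y, z)} would float: plan infeasible")
        plan.append((x, y, z))
        placed.add((x, y, z))
    return plan

model = voxelize_dome()
plan = assembly_plan(model)
print(f"{len(plan)} blocks in total, {sum(1 for b in plan if b[2] == 0)} in the bottom layer")
```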
References
[1] K. Ono, T. Ogawa, Y. Maeda, S. Nakatani, G. Nagayasu, R. Shimizu and N. Ouchi: Detection, Lo-
calization and Picking Up of Coil Springs from a Pile, Proc. of 2014 IEEE Int. Conf. on Robotics and
Automation (ICRA 2014), pp. 3477–3482, 2014.
[2] Y. Maeda, H. Tsuruga, H. Honda and S. Hirono: Unknown Object Detection by Punching: An Impacting-
based Approach to Picking Novel Objects, M. Strand et al. eds., Intelligent Autonomous Systems 15,
pp. 668–678, Springer, 2018.
[3] Y. Maeda, O. Nakano, T. Maekawa and S. Maruo: From CAD Models to Toy Brick Sculptures: A
3D Block Printer, Proc. of 2016 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2016),
pp. 2167–2172, 2016.
[4] C. Sugimoto, Y. Maeda, T. Maekawa and S. Maruo: A 3D Block Printer Using Toy Bricks for Various
Models, Proc. of 13th IEEE Conf. on Automation Science and Engineering (CASE 2017), pp. 958–963,
2017.
[5] M. Kohama, C. Sugimoto, O. Nakano and Y. Maeda: Robotic Additive Manufacturing with Toy Blocks,
IISE Trans., Vol. 53, No. 3, pp. 273–284, 2021.
Fig. 18 Bin-Picking of Coil Springs
Fig. 19 Impacting-based Picking
Fig. 20 3D Block Printing
Modeling and Measurement of Human Hands and Their Dexterity
The theory of robotic manipulation can be applied to the analysis of human hands and their dexterity. Understanding human dexterity is very important for implementing high dexterity in robots. We are conducting studies on the modeling of human hands and skills jointly with the Digital Human Research Team, AIST.
Digital Hands We are developing a method for generating models of human hands that have links and skin (Fig. 21) to represent hand motion measured with motion capture [1]. Applications of the hand models include modeling the range of motion of human hands together with subjective discomfort [2].
Grasp Measurement and Synthesis Digital hands can be used to synthesize grasps to support ergonomic product design (Fig. 22) [3]. We also study a simple grasp measurement device (Fig. 23) [4] and grasp synthesis for hands with a limited range of motion (ROM) [5].
References
[1] N. Miyata, Y. Shimizu, Y. Motoki, Y. Maeda and M. Mochimaru: Hand MoCAP by Building an Indi-
vidual Hand Model, Int. J. of Human Factors Modelling and Simulation, Vol. 3, No. 2, pp. 147–168,
2012.
[2] N. Miyata, Y. Yoneoka and Y. Maeda: Modeling the Range of Motion and the Degree of Posture Discom-
fort of the Thumb Joints, S. Bagnara et al. eds., Proceedings of the 20th Congress of the International
Ergonomics Association (IEA 2018), Volume V: Human Simulation and Virtual Environments, Work
With Computing Systems (WWCS), Process Control, pp. 324–329, Springer, 2018.
[3] T. Hirono, N. Miyata and Y. Maeda: Grasp Synthesis for Variously-Sized Hands Using a Grasp Database
That Covers Variation of Contact Region, Proc. of 3rd Int. Digital Human Modeling Symp. (DHM 2014),
11, 2014.
[4] N. Miyata, K. Honoki, Y. Maeda, Y. Endo, M. Tada and Y. Sugiura: Wrap & Sense: Grasp Capture by
a Band Sensor, Adjunct Proc. of 29th Annual Symp. on User Interface Software and Technology (UIST
2016), pp. 87–89, 2016.
[5] R. Takahashi, N. Miyata, Y. Maeda and K. Fujita: Grasp Synthesis for Digital Hands with Limited Range
of Motion in Their Thumb Joints, Proc. of 2019 IEEE Int. Conf. on Systems, Man and Cybernetics (SMC
2019), pp. 191–196, 2019.
Fig. 21 Model of a Human Hand
Fig. 22 Grasp Synthesis for Various Hands
Fig. 23 “Wrap & Sense”
Application of Robot Technology to Human Activity Support
Robot technology should be applied to various fields to support human activities. For example, home appliances are expected to become increasingly robotized to support our daily lives intelligently and effectively. We proposed a smart dishwasher system that supports users in loading a dishwasher [1][2][3]. The system recognizes dishes from a picture of the dining table after a meal, calculates an optimal placement of the recognized dishes in the dishwasher, and presents the result to the user as 3D graphics (Fig. 24).
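As a toy illustration of the placement step only, the sketch below packs recognized dishes into a one-dimensional rack row so that as many dishes as possible fit. The dish list, rack length, and greedy smallest-first rule are assumptions for illustration; the actual system works with real rack geometry, image-based dish recognition, and 3D presentation of the result.

```python
def load_rack(dish_widths, rack_length, gap=1.0):
    """Greedy placement: smallest dishes first, packed left to right with a safety gap [cm]."""
    placement, used = [], 0.0
    for width in sorted(dish_widths):
        if used + width <= rack_length:
            placement.append((used, width))   # (left-edge position, dish width)
            used += width + gap
    return placement

dishes = [24, 24, 18, 15, 12, 12]             # recognized dish diameters [cm] (assumed)
plan = load_rack(dishes, rack_length=60)
print(f"loaded {len(plan)} of {len(dishes)} dishes:", plan)
```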
We are also developing a support system for human origami folding [4]. It consists of an origami simulator for designing and displaying origami folding processes (Fig. 25) and a cutting plotter for adding crease patterns automatically. The system can be used in early childhood education and elderly care.
References
[1] Y. Kurata and Y. Maeda: Toward a Smart Dishwasher: A Support System for Optimizing Dishwasher
Loading, IPSJ SIG Technical Report, Vol. 2016-CDS-16, No. 9, 2016 (in Japanese).
[2] K. Imai and Y. Maeda: A User Support System That Optimizes Dishwasher Loading, Proc. of 2017 IEEE
6th Global Conf. on Consumer Electronics (GCCE 2017), pp. 523–524, 2017.
[3] Y. Ogawa and Y. Maeda: Support of Dishwasher Loading by Counting the Number of Dishes with Image
Processing, Proc. of JSME Conf. on Robotics and Mechatronics 2018 (ROBOMECH 2018), 2A2-J17,
2018 (in Japanese).
[4] Y. Nakajima and Y. Maeda: An Origami Support System by Automated Crease Addition and Folding
Process Display, Proc. of 38th RSJ Annual Conf., RSJ2020AC3J1-03, 2020 (in Japanese).
Fig. 24 Optimized Dish Loading: (a) calculated result; (b) loaded dishes
Fig. 25 Origami Simulator