MAEDA Lab
INTELLIGENT & INDUSTRIAL ROBOTICS
Maeda Lab: 2023–2024
Div. of Systems Research, Faculty of Engineering / Specialization in Mechanical Engineering, Dept. of
Mechanical Engineering, Materials Science, and Ocean Engineering, Graduate School of Engineering Science /
Interfaculty Graduate School of Innovative and Practical Studies /
Dept. of Mechanical Engineering, Materials Science, and Ocean Engineering, College of Engineering Science,
Yokohama National University
Mechanical Engineering and Materials Science Bldg. (N6-5),
79-5 Tokiwadai, Hodogaya-ku, Yokohama, 240-8501 JAPAN
Tel/Fax +81-45-339-3918 (Prof. Maeda)/+81-45-339-3894 (Lab)
E-mail maeda[at]ynu.ac.jp
https://iir.ynu.ac.jp/
People (2023–2024 Academic Year)
Dr. Yusuke MAEDA (Professor, Div. of Systems Research, Fac. of Engineering)
Master’s Students (Specialization in Mechanical Engineering, Dept. of Mechanical
Engineering, Materials Science, and Ocean Engineering, Graduate School of Engineering
Science/Interfaculty Graduate School of Innovative and Practical Studies)
Haruki KAMIKUKITA
Kenta SAKAKI
Mizuki SHONO
Itsuma MATSUI
Pedro SAMAN
Rohit THAKUR
Honoka OKUGUCHI
Naoya TAKAHASHI
Shuto YAMADA
Cheng WU
Undergraduate Students (Mechanical Engineering Program, Dept. of Mechanical
Engineering, Materials Science, and Ocean Engineering, College of Engineering Science)
Rion SATO
Kazuhiro KURIHARA
Masato KONISHI
Shoma SUGISAWA
Hiroki TABATA
SLAM-Integrated Kinematic Calibration (SKCLAM)
SLAM (Simultaneous Localization and Mapping) techniques can be applied to industrial manipulators for 3D mapping of their surroundings and calibration of their kinematic parameters. We call this “SKCLAM” (Simultaneous Kinematic Calibration, Localization and Mapping). Using an RGB-D camera attached to the end-effector of a manipulator (Fig. 1), we demonstrated successful SKCLAM in both a virtual environment (Fig. 2) and a real environment (Fig. 3) [1][2]. We are also studying SKCLAM with spherical cameras [3] and stereo cameras [4].
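To make the idea concrete, the following is a minimal, self-contained sketch of the kind of joint optimization behind SKCLAM, written for a toy planar 2-link arm observing synthetic landmarks; the simplified camera model, variable names, and data are illustrative assumptions, not the implementation reported in [1][2].

```python
# Minimal SKCLAM-style sketch (not the lab's implementation): jointly refine the
# link lengths of a toy 2-link planar arm and the positions of a few landmarks,
# using landmark observations expressed in the end-effector (camera) frame.
import numpy as np
from scipy.optimize import least_squares

def fk(q, lengths):
    """End-effector pose (x, y, theta) of a planar 2-link arm."""
    l1, l2 = lengths
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return x, y, q[0] + q[1]

def observe(landmarks_w, q, lengths):
    """Landmark coordinates expressed in the camera (end-effector) frame."""
    x, y, th = fk(q, lengths)
    R = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
    return (landmarks_w - np.array([x, y])) @ R.T

def residuals(params, joint_configs, observations, n_landmarks):
    lengths = params[:2]
    landmarks = params[2:].reshape(n_landmarks, 2)
    res = []
    for q, obs in zip(joint_configs, observations):
        res.append((observe(landmarks, q, lengths) - obs).ravel())
    return np.concatenate(res)

# Synthetic data: true model, plus a perturbed initial guess of the parameters.
true_lengths = np.array([0.50, 0.40])
true_landmarks = np.array([[0.8, 0.2], [0.3, 0.7], [-0.2, 0.5]])
joint_configs = [np.array([a, b]) for a in np.linspace(0, 1.2, 5)
                 for b in np.linspace(-0.8, 0.8, 5)]
observations = [observe(true_landmarks, q, true_lengths) for q in joint_configs]

x0 = np.concatenate([[0.52, 0.38], (true_landmarks + 0.05).ravel()])
sol = least_squares(residuals, x0, args=(joint_configs, observations, 3))
print("estimated link lengths:", sol.x[:2])   # ideally approaches [0.50, 0.40]
```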
References
[1] J. Li, A. Ito, H. Yaguchi and Y. Maeda: Simultaneous kinematic calibration, localization, and mapping
(SKCLAM) for industrial robot manipulators, Advanced Robotics, Vol. 33, No. 23, pp. 1225–1234, 2019.
[2] A. Ito, J. Li and Y. Maeda: SLAM-Integrated Kinematic Calibration Using Checkerboard Patterns, Proc.
of 2020 IEEE/SICE Int. Symp. on System Integration (SII 2020), pp. 551–556, 2020.
[3] Y. Tanaka, J. Li, A. Ito and Y. Maeda: SLAM-Integrated Kinematic Calibration with Spherical Cameras
for Industrial Manipulators, Proc. of JSME Conf. on Robotics and Mechatronics 2020 (ROBOMECH
2020), 2P2-B05, 2020 (in Japanese).
[4] Y. Nagatomo, J. Li, Y. Tanaka and Y. Maeda: SLAM-integrated Kinematic Calibration with a Stereo
Camera for Industrial Robots, Proc. of JSME Conf. of Manufacturing Systems Division 2021, pp. 77–
78, 2021 (in Japanese).
Fig. 1 Manipulator Equipped with an RGB-D Camera
Fig. 2 SKCLAM in Virtual Environment
Fig. 3 Example of an Obtained 3D Map
Robot Teaching
Teaching is indispensable for current industrial robots to execute tasks. Human operators have to teach motions to robots in detail by, for example, conventional teaching/playback. However, robot teaching is complicated and time-consuming for novice operators, and the cost of training them is often unaffordable for small companies. We are therefore studying easy robot programming methods to promote wider use of robots.
Robot programming with manual volume sweeping We developed a robot programming method for part handling [1][2]. In this method, a human operator makes a robot manipulator sweep a volume with its body. The swept volume represents (part of) the manipulator’s free space, because the manipulator has passed through it without collisions. The obtained swept volume is then used by a motion planner to generate a well-optimized path for the manipulator automatically. The swept volume can be displayed with Augmented Reality (AR) so that human operators can easily understand it, which leads to efficient robot programming [3] (Fig. 4).
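The sketch below illustrates only the core bookkeeping, under the assumption that swept space is stored as a voxel set; the stand-in robot geometry, voxel size, and function names are placeholders, not the implementation of [1]-[3].

```python
# Hedged sketch of the volume-sweeping idea: voxels passed through during manual
# guidance are recorded as known free space, and a planner is later allowed to
# move only through those voxels.
import numpy as np

GRID = 0.05                      # voxel size [m] (illustrative value)
swept = set()                    # voxel indices known to be collision-free

def body_points(ee_pos, n=20):
    """Sample points along a straight 'link' from the base to the end-effector.
    A real system would use the full robot geometry; this is a stand-in."""
    base = np.zeros(3)
    return [base + t * (ee_pos - base) for t in np.linspace(0.0, 1.0, n)]

def record_sweep(ee_pos):
    """Called for every pose reached during manual guidance."""
    for p in body_points(np.asarray(ee_pos)):
        swept.add(tuple(np.floor(p / GRID).astype(int)))

def is_known_free(p):
    """Planner-side check: only voxels the robot has already swept are trusted."""
    return tuple(np.floor(np.asarray(p) / GRID).astype(int)) in swept

# Manual guidance phase (poses would come from hand guiding / the teach pendant).
for ee in [(0.4, 0.0, 0.3), (0.4, 0.1, 0.3), (0.4, 0.2, 0.35)]:
    record_sweep(ee)

print(is_known_free((0.2, 0.05, 0.16)))   # True only if that voxel was swept
```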
Assisting Online Robot Programming We are developing a support system for online robot programming using an optical see-through AR device that can overlay useful information, such as the robot’s movable area, on a real robot (Fig. 5). The system also supports the above robot programming with manual volume sweeping [4]. We have also developed another support system for online robot programming, in which existing teaching points can be grouped and moved, and robot motions connecting the points can be generated. This is useful for adapting to product specification changes in robotic assembly [5].
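As a rough illustration of the teach-point grouping idea, the sketch below moves a group of taught points by a rigid transform and reconnects them with naive straight-line interpolation; the transform values and helper names are assumptions for the example, not the system of [5].

```python
# Illustrative sketch (assumed workflow, not the published system): teach points
# are grouped, the whole group is moved by a rigid transform when the product
# changes, and neighbouring points are reconnected by simple interpolation.
import numpy as np

def rigid_transform(points, R, t):
    """Apply rotation R (3x3) and translation t (3,) to an (N,3) array of points."""
    return points @ R.T + t

def connect(p_from, p_to, n_steps=10):
    """Naive straight-line reconnection between two teach points."""
    return [p_from + s * (p_to - p_from) for s in np.linspace(0.0, 1.0, n_steps)]

# A group of teach points taught for the old product position.
group = np.array([[0.50, 0.10, 0.20],
                  [0.50, 0.15, 0.20],
                  [0.55, 0.15, 0.25]])

# Product moved 30 mm in y and rotated 5 degrees about z.
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
new_group = rigid_transform(group, R, np.array([0.0, 0.03, 0.0]))

path = connect(new_group[0], new_group[1])
print(np.round(new_group, 4))
```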
References
[1] Y. Maeda, T. Ushioda and S. Makita: Easy Robot Programming for Industrial Manipulators by Manual
Volume Sweeping, Proc. of 2008 IEEE Int. Conf. on Robotics and Automation (ICRA 2008), pp. 2234–
2239, 2008.
[2] S. Ishii and Y. Maeda: Programming of Robots Based on Online Computation of Their Swept Vol-
umes, Proc. of 23rd IEEE Int. Symp. on Robot and Human Interactive Communication (RO-MAN 2014),
pp. 385–390, 2014.
[3] Y. Sarai and Y. Maeda: Robot Programming for Manipulators through Volume Sweeping and Augmented
Reality, Proc. of 13th IEEE Conf. on Automation Science and Engineering (CASE 2017), pp. 302–307,
2017.
[4] K. Takahashi and Y. Maeda: A Robot Programming System Based on ROS/MoveIt Utilizing AR: Im-
plementation of Motion Planning Function Based on Volume Sweeping, Proc. of SICE 23rd Conf. on
System Integration (SI2022), pp. 994–998, 2022 (in Japanese).
[5] H. Ihara and Y. Maeda: A Robot Programming System with Teach Point Manipulation and Motion Plan-
ning to Adapt Product Specification Change, Proc. of SICE 22nd Conf. on System Integration (SI2021),
pp. 3263–3267, 2021 (in Japanese).
Fig. 4 AR Display of Swept Volume and Planned Path
Fig. 5 AR Display of Movable Area with Fixed Gripper Pose
View-Based Teaching/Playback
We developed a teaching/playback method based on camera images for industrial manipulators [1][2]. In this method, robot motions and scene images in human demonstrations are recorded to obtain an image-to-motion mapping, and the mapping is used for playback (Fig. 6). This achieves greater robustness against changes in task conditions than conventional joint-variable-based teaching/playback. Our method adopts end-to-end learning through view-based image processing, and therefore neither object models nor camera calibration is necessary. We are improving our view-based teaching/playback with range images (Fig. 7) and occlusion-aware techniques for greater robustness [3]. For application to force-control tasks, visualization of force information based on photoelasticity (Fig. 8) is under investigation [4]. We are also trying to integrate reinforcement learning with view-based teaching/playback to reduce the human operations required for teaching [5].
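A minimal sketch of an image-to-motion mapping is given below; it uses a crude nearest-neighbour lookup on downsampled images, which is only a stand-in for the view-based regression actually used in [1][2].

```python
# Minimal sketch in the spirit of view-based teaching/playback: store
# (image feature, motion) pairs during teaching, then replay the motion whose
# stored image is closest to the current view.
import numpy as np

class ViewBasedMapping:
    def __init__(self):
        self.features = []   # flattened, downsampled demonstration images
        self.motions = []    # robot motion (e.g., joint velocity) at each frame

    @staticmethod
    def _feature(image, size=16):
        img = np.asarray(image, dtype=float)
        h, w = img.shape
        img = img[:h - h % size, :w - w % size]
        img = img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
        return img.ravel() / 255.0

    def record(self, image, motion):                # teaching phase
        self.features.append(self._feature(image))
        self.motions.append(np.asarray(motion))

    def playback(self, image):                      # playback phase
        f = self._feature(image)
        d = [np.linalg.norm(f - g) for g in self.features]
        return self.motions[int(np.argmin(d))]

mapping = ViewBasedMapping()
demo_img = np.random.randint(0, 256, (480, 640))
mapping.record(demo_img, motion=[0.01, 0.0, -0.02])
print(mapping.playback(demo_img))                   # reproduces the taught motion
```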
References
[1] Y. Maeda and T. Nakamura: View-based teaching/playback for robotic manipulation, ROBOMECH J.,
Vol. 2, 2, 2015.
[2] Y. Maeda and Y. Moriyama: View-Based Teaching/Playback for Industrial Manipulators, Proc. of 2011
IEEE Int. Conf. on Robotics and Automation (ICRA 2011), pp. 4306–4311, 2011.
[3] Y. Maeda and Y. Saito: Lighting- and Occlusion-robust View-based Teaching/Playback for Model-free
Robot Programming, W. Chen et al. eds., Intelligent Autonomous Systems 14, pp. 939–952, Springer,
2017.
[4] Y. Nakagawa, Y. Maeda and S. Ishii: View-Based Teaching/Playback with Photoelasticity for Force-
Control Tasks, W. Chen et al. eds., Intelligent Autonomous Systems 14, pp. 825–837, Springer, 2017.
[5] Y. Maeda and R. Aburata: Teaching and Reinforcement Learning of Robotic View-Based Manipulation,
Proc. of 22nd IEEE Int. Symp. on Robot and Human Interactive Communication (RO-MAN 2013), pp.
87–92, 2013.
Fig. 6 Outline of View-Based Teaching/Playback: (a) human teaching, (b) image-to-motion mapping, (c) view-based playback
Fig. 7 Switching between Grayscale and Range Images for View-Based Teaching/Playback
Fig. 8 View-based Teaching/Playback with Photoelasticity
Caging and Caging-based Grasping
Caging is a method to constrain objects geometrically so that they cannot escape from a “cage” formed by robot bodies.
3D multifingered caging While most related studies deal with planar caging, we study three-dimensional caging by multifingered robot hands (Fig. 9). Caging does not require force control, and is therefore well suited to current robotic devices; it broadens the options available for robotic manipulation. We are investigating sufficient conditions for 3D multifingered caging and developing an algorithm that plans hand motions for caging based on these conditions [1]. Robot motions generated by the developed planning algorithm were validated on an arm-hand system [2] (Fig. 10).
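The sketch below conveys the flavour of condition-based planning with a deliberately simplified gap test: a ring of spherical fingertips is closed until no gap is wide enough for a spherical object to slip through. The actual sufficient conditions and planner in [1][2] are far more general; everything here is illustrative.

```python
# Toy planning sketch: shrink a ring of spherical fingertips until a simple
# gap-based caging test holds for a spherical object (illustrative only).
import numpy as np

def ring_fingertips(radius, n_fingers=3):
    angles = np.linspace(0.0, 2.0 * np.pi, n_fingers, endpoint=False)
    return np.stack([radius * np.cos(angles), radius * np.sin(angles),
                     np.zeros(n_fingers)], axis=1)

def caged(fingertips, fingertip_radius, object_radius):
    """No gap between adjacent fingertips is wide enough for the object."""
    n = len(fingertips)
    for i in range(n):
        gap = np.linalg.norm(fingertips[i] - fingertips[(i + 1) % n]) - 2 * fingertip_radius
        if gap >= 2 * object_radius:
            return False
    return True

def plan_ring_radius(object_radius, fingertip_radius, start=0.15, step=0.005):
    r = start
    while r > object_radius + fingertip_radius:       # stop before penetration
        if caged(ring_fingertips(r), fingertip_radius, object_radius):
            return r                                   # loosest caging radius found
        r -= step
    return None

print(plan_ring_radius(object_radius=0.03, fingertip_radius=0.01))
```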
Caging-based Grasping Position-controlled robot hands can capture an object and manipulate it via caging without force sensing or force control. However, the caged object remains movable within the closed region, which is not acceptable in some applications; in such cases, grasping is required. We proposed a simple new approach to grasping by position-controlled robot hands: caging-based grasping by robot fingers with rigid parts and outer soft parts. In caging-based grasping, we cage an object with the rigid parts of a robot hand and then form a complete grasp with its soft parts. We are studying the formal definition of caging-based grasping and concrete conditions for it in planar and spatial cases. Based on the derived conditions, we demonstrated planar caging-based grasping by mobile robots and spatial caging-based grasping by a multifingered hand (Fig. 11) [3][4]. We have also extended the theory of caging-based grasping to deal with deformable objects (Fig. 12) [5].
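The toy check below combines the two ingredients in the simplest possible form: a gap-based caging test on the rigid finger skeleton plus a reachability test for the soft covers. The thresholds and geometry are placeholders, not the formal conditions derived in [3][4].

```python
# Hedged sketch of the caging-based grasping idea: the rigid skeleton must cage
# the object, and the soft covers (thickness soft_thickness) must reach the
# object surface so that a complete grasp is formed.
import numpy as np

def is_caging_based_grasp(fingertips, rigid_radius, soft_thickness,
                          object_center, object_radius):
    fingertips = np.asarray(fingertips, dtype=float)
    n = len(fingertips)
    # (1) caging condition on the rigid parts: no gap wide enough for escape
    for i in range(n):
        gap = np.linalg.norm(fingertips[i] - fingertips[(i + 1) % n]) - 2 * rigid_radius
        if gap >= 2 * object_radius:
            return False
    # (2) grasping condition on the soft parts: soft covers touch the object
    dists = np.linalg.norm(fingertips - object_center, axis=1) - rigid_radius - object_radius
    return bool(np.all(dists <= soft_thickness))

ring = [[0.05, 0.0, 0.0], [0.0, 0.05, 0.0], [-0.05, 0.0, 0.0], [0.0, -0.05, 0.0]]
print(is_caging_based_grasp(ring, rigid_radius=0.008, soft_thickness=0.01,
                            object_center=[0.0, 0.0, 0.0], object_radius=0.035))
```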
References
[1] S. Makita and Y. Maeda: 3D Multifingered Caging: Basic Formulation and Planning, Proc. of 2008
IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2008), pp. 2697–2702, 2008.
[2] S. Makita, K. Okita and Y. Maeda: 3D Two-Fingered Caging for Two Types of Objects: Sufficient
Conditions and Planning, Int. J. of Mechatronics and Automation, Vol. 3, No. 4, pp. 263–277, 2013.
[3] Y. Maeda, N. Kodera and T. Egawa: Caging-Based Grasping by a Robot Hand with Rigid and Soft Parts,
Proc. of 2012 IEEE Int. Conf. on Robotics and Automation (ICRA 2012), pp. 5150–5155, 2012.
[4] T. Egawa, Y. Maeda and H. Tsuruga: Two- and Three-dimensional Caging-Based Grasping of Objects
of Various Shapes with Circular Robots and Multi-Fingered Hands, Proc. of 41st Ann. Conf. of IEEE
Industrial Electronics Soc. (IECON 2015), pp. 643–648, 2015.
[5] D. Kim, Y. Maeda and S. Komiyama: Caging-based Grasping of Deformable Objects for Geometry-based Robotic Manipulation, ROBOMECH J., Vol. 6, 3, 2019.
Fig. 9 3D Multifingered Caging
Fig. 10 Caging of a Sphere
Fig. 11 Caging-based Grasping by a Multifingered Hand
Fig. 12 Caging-based Grasping of a Deformable Object
Caging Manipulation
Caging is a method to geometrically confine an object in a closed region from which it cannot escape. We study robotic manipulation with caging, or “caging manipulation.”
In-Hand Caging Manipulation The pose of an object caged in a robot hand can be controlled to some extent by changing the hand configuration. We call this “in-hand caging manipulation.” It enables position-controlled robot hands to perform robust in-hand manipulation. A planning algorithm for in-hand caging manipulation was developed [1][2]. We are also studying various forms of in-hand caging manipulation [3], including versatile part feeders [4] (Fig. 13).
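Conceptually, in-hand caging manipulation drags the object along by moving the cage while never losing it; the sketch below expresses that idea with a toy gap test rather than the conditions and planner of [1][2].

```python
# Conceptual sketch of in-hand caging manipulation: the hand configuration is
# changed step by step, and as long as every intermediate configuration keeps
# the object caged, the object is transported inside the cage even though it is
# never rigidly grasped. The caged() test is a toy gap check, not the published
# conditions.
import numpy as np

def shifted_ring(center, radius, n_fingers=4):
    angles = np.linspace(0.0, 2.0 * np.pi, n_fingers, endpoint=False)
    ring = np.stack([radius * np.cos(angles), radius * np.sin(angles),
                     np.zeros(n_fingers)], axis=1)
    return ring + np.asarray(center)

def caged(fingertips, fingertip_radius, object_radius):
    n = len(fingertips)
    gaps = [np.linalg.norm(fingertips[i] - fingertips[(i + 1) % n]) - 2 * fingertip_radius
            for i in range(n)]
    return max(gaps) < 2 * object_radius

# Slide the cage 10 cm in x in 1 cm steps; abort if caging would be lost.
trajectory = [np.array([0.01 * k, 0.0, 0.0]) for k in range(11)]
for c in trajectory:
    assert caged(shifted_ring(c, radius=0.05), fingertip_radius=0.01,
                 object_radius=0.04), "cage broken at this step"
print("object transported inside the cage to", trajectory[-1])
```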
Cooperative Caging Manipulation An object is not fully constrained in caging. This property enables cooperative manipulation based on position control without excessive internal forces. We study dual-arm cooperative manipulation of long objects with caging or caging-based grasping (Fig. 14) [5]. It does not require force control and can handle a variety of objects with appropriate end-effectors.
References
[1] Y. Maeda and T. Asamura: Sensorless In-hand Caging Manipulation, W. Chen et al. eds., Intelligent
Autonomous Systems 14, pp. 255–267, Springer, 2017.
[2] S. Komiyama and Y. Maeda: Position and Orientation Control of Polygonal Objects by Sensorless
In-hand Caging Manipulation, Proc. of IEEE Int. Conf. on Robotics and Automation (ICRA 2021),
pp. 6244–6249, 2021.
[3] Y. Maeda, T. Asamura, T. Egawa and Y. Kurata: Geometry-Based Manipulation through Robotic Caging,
IEEE/RSJ IROS 2014 Workshop on Robot Manipulation: What has been achieved and what remains to
be done?, 2014.
[4] H. Kamikukita, Y. Nakanishi and Y. Maeda: Realization of a General-purpose Part Feeder with Sensor-
less In-hand Caging Manipulation, Proc. of SICE 22nd Conf. on System Integration (SI2021), pp. 3246–
3248, 2021 (in Japanese).
[5] Y. Hiraki and Y. Maeda: Caging-based Dual-arm Cooperation without Force Control, Proc. of JSME
Conf. on Robotics and Mechatronics 2020 (ROBOMECH 2020), 2A1-M05, 2020 (in Japanese).
Fig. 13 A Versatile Part Feeder with In-Hand Caging Manipulation
Fig. 14 Dual-arm Cooperative Manipulation with Caging: (a) wire harness, (b) long pipe
Photoelastic Force Distribution Sensing and Its Applications
Photoelasticity enables pixelwise stress analysis using a photoelastic body, a polarized light source, and a polarization camera. The distribution of contact forces on the photoelastic body can also be estimated. We developed a robot finger equipped with a photoelastic fingertip (Fig. 15), which can perform online contact force distribution sensing and contact force control [1]. We also developed a robot hand with force-sensing photoelastic links (Fig. 16) [2].
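As background, the sketch below shows only the generic first step with a polarization camera: computing the angle and degree of linear polarization from the four-angle intensity images. Relating these quantities to stress and to a contact force distribution requires the modelling and calibration described in [1][2], which are not reproduced here.

```python
# Hedged sketch of standard polarization-camera processing: the intensity images
# at 0/45/90/135 degrees give the linear Stokes parameters, whose angle and
# degree of linear polarization relate to the isoclinic angle and retardation of
# a photoelastic body.
import numpy as np

def polarization_features(i0, i45, i90, i135, eps=1e-9):
    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = i0 - i90                                # linear Stokes components
    s2 = i45 - i135
    aolp = 0.5 * np.arctan2(s2, s1)              # angle of linear polarization
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)   # degree of linear polarization
    return aolp, dolp

# Synthetic 4-angle images (a real polarization camera interleaves these pixels).
rng = np.random.default_rng(0)
imgs = [rng.uniform(0.0, 1.0, (8, 8)) for _ in range(4)]
aolp, dolp = polarization_features(*imgs)
print(aolp.shape, float(dolp.mean()))
```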
References
[1] M. Kohama and Y. Maeda: Photoelasticity-based Online Force Distribution Sensing And Its Application
to Pressing Force Control, IFAC 2023 World Congress, 2023 (to appear).
[2] Y. Tahara, H. Kondo, M. Kohama and Y. Maeda: Development of a Force-sensible Robot Hand with
Photoelastic Links —Improvement of Stress Distribution Analysis And Its Evaluation—, J. of Robotics
Soc. of Japan, Vol. 41, 2023 (in Japanese, to appear).
Fig. 15 A robot finger with a photoelastic fingertip (polarization camera (VCXU-50MP) with green light filter and quarter-wave film, linearly polarized light source display, polyurethane photoelastic resin on a 2-DoF robot, and a wall attached to a force-torque sensor)
Fig. 16 A robot hand composed of photoelastic bodies
Handling of Various Objects by Robots
Techniques for robotic manipulation of a variety of objects are under investigation.
Vision-Based Object Picking Robotic bin-picking is more flexible and versatile than the use of conventional part feeders, and is therefore effective for low-volume production. Many bin-picking techniques have been proposed, and some of them are in actual use. However, it is difficult to apply these existing techniques to coil springs because of their shape. We therefore developed a dedicated method to recognize and localize coil springs in a pile, which enabled robotic bin-picking of coil springs (Fig. 17) [1]. In addition, we are developing an impacting-based method to detect unknown objects for picking (Fig. 18) [2].
3D Block Printing We developed a robotic 3D printer: a robot system that can assemble toy brick sculptures from their 3D CAD models [3][4]. In this system, a 3D CAD model is automatically converted into a block model consisting of primitive toy blocks. An assembly plan for the block model is then generated automatically, if one is feasible. According to the plan, an industrial robot assembles the brick sculpture layer by layer from bottom to top. We demonstrated successful assembly of several brick sculptures (Fig. 19).
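A heavily simplified sketch of the planning idea follows: a voxelized model is covered layer by layer with 1x2 and 1x1 bricks, and the placements are emitted bottom to top. The real conversion and assembly planning in [3][4] handle interlocking, stability, and reachability, none of which appear here.

```python
# Simplified block-printing sketch: greedily cover each layer of a boolean voxel
# model with 1x2 and 1x1 bricks and list the placements bottom to top, which is
# the order an arm would assemble them.
import numpy as np

def plan_layers(voxels):
    """voxels: boolean array indexed [z, y, x]; returns a list of brick placements."""
    plan = []
    for z in range(voxels.shape[0]):                 # bottom to top
        layer = voxels[z].copy()
        for y in range(layer.shape[0]):
            x = 0
            while x < layer.shape[1]:
                if layer[y, x]:
                    if x + 1 < layer.shape[1] and layer[y, x + 1]:
                        plan.append(("1x2", z, y, x))
                        layer[y, x:x + 2] = False
                        x += 2
                        continue
                    plan.append(("1x1", z, y, x))
                    layer[y, x] = False
                x += 1
    return plan

# A tiny 2-layer "sculpture": a 2x3 slab with a 1x2 block on top.
model = np.zeros((2, 2, 3), dtype=bool)
model[0] = True
model[1, 0, :2] = True
for brick in plan_layers(model):
    print(brick)
```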
References
[1] K. Ono, T. Ogawa, Y. Maeda, S. Nakatani, G. Nagayasu, R. Shimizu and N. Ouchi: Detection, Lo-
calization and Picking Up of Coil Springs from a Pile, Proc. of 2014 IEEE Int. Conf. on Robotics and
Automation (ICRA 2014), pp. 3477–3482, 2014.
[2] Y. Maeda, H. Tsuruga, H. Honda and S. Hirono: Unknown Object Detection by Punching: An Impacting-
based Approach to Picking Novel Objects, M. Strand et al. eds., Intelligent Autonomous Systems 15,
pp. 668–678, Springer, 2018.
[3] Y. Maeda, O. Nakano, T. Maekawa and S. Maruo: From CAD Models to Toy Brick Sculptures: A
3D Block Printer, Proc. of 2016 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2016),
pp. 2167–2172, 2016.
[4] M. Kohama, C. Sugimoto, O. Nakano and Y. Maeda: Robotic Additive Manufacturing with Toy Blocks,
IISE Trans., Vol. 53, No. 3, pp. 273–284, 2021.
Fig. 17 Bin-Picking of Coil Springs
Fig. 18 Impacting-based Picking
Fig. 19 3D Block Printing
Fig. 20 A Robot System to Fold a Paper Crane
Intelligent Heavy Equipment Systems
Automation and intelligentization of heavy machinery are in great demand for higher efficiency and safety. We study traffic control of dump truck fleets in mines (Fig. 21) to improve productivity. A combinatorial optimization method for the order in which trucks pass intersections was developed and tested on a simulator (Fig. 22) [1][2].
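As a toy version of the ordering problem, the sketch below chooses the passing order of a few trucks at a single intersection by brute force so as to minimize total waiting time; the arrival times, crossing time, and objective are illustrative assumptions, not those of [1][2].

```python
# Toy intersection-ordering sketch: enumerate all passing orders of the trucks
# and keep the one with the smallest total waiting time.
from itertools import permutations

def total_wait(order, arrival, crossing_time=10.0):
    t_free, wait = 0.0, 0.0
    for truck in order:
        start = max(arrival[truck], t_free)      # wait until the intersection is clear
        wait += start - arrival[truck]
        t_free = start + crossing_time
    return wait

arrival = {"T1": 0.0, "T2": 3.0, "T3": 4.0, "T4": 25.0}   # arrival times [s]
best = min(permutations(arrival), key=lambda o: total_wait(o, arrival))
print(best, total_wait(best, arrival))
```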
References
[1] Y. Ogawa, Y. Maeda, Y. Matsui, A. Sakai, K. Osagawa and K. Takeda: Traffic Control of Dump Truck
Fleets at Intersections for Mining Productivity Improvement, Trans. of JSME, Vol. 87, No. 894, 20-
00097, 2021 (in Japanese).
[2] Y. Maeda, Y. Ogawa, K. Osagawa, A. Sakai and Y. Matsui: Worksite Management System And Worksite
Management Method, World patent application WO/2021/145392.
Fig. 21 Dump Truck Fleets in a Mine
Fig. 22 A Simulator of Dump Truck Fleets
Modeling and Measurement of Human Hands and Their Dexterity
The theory of robotic manipulation can be applied to the analysis of human hands and their dexterity. Understanding human dexterity is very important for implementing high dexterity on robots. We are conducting studies on modeling human hands and skills jointly with the Living Activity Modeling Research Team, AIST.
Modeling and Measurement of Human Hands We are developing a method for generating computational models of human hands, with links and skins, that represent hand motion measured by motion capture. Applications of these digital hand models include modeling the range of motion of human hands together with subjective discomfort [1]. We are also studying a simple grasp measurement device (Fig. 23) [2].
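As a small illustration of marker-based measurement, the sketch below estimates one finger joint angle from three motion-capture markers along the finger; the full link/skin fitting used for the digital hand models is not shown.

```python
# Tiny sketch of one ingredient of marker-based hand modelling: a finger joint
# angle estimated as the bend angle at the middle of three markers placed along
# the finger (0 degrees = straight finger).
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    u = np.asarray(p_prox, float) - np.asarray(p_joint, float)
    v = np.asarray(p_dist, float) - np.asarray(p_joint, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.pi - np.arccos(np.clip(c, -1.0, 1.0)))

print(joint_angle([0, 0, 0], [0.04, 0, 0], [0.07, 0.02, 0]))   # flexion in degrees
```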
Grasp Measurement and Synthesis Digital hands can be used to synthesize grasps to support ergonomic product design (Fig. 24) [3]. Grasps by the hands of patients with carpal tunnel syndrome and by elderly people can also be simulated (Fig. 25) [4].
References
[1] N. Miyata, Y. Yoneoka and Y. Maeda: Modeling the Range of Motion and the Degree of Posture Discom-
fort of the Thumb Joints, S. Bagnara et al. eds., Proceedings of the 20th Congress of the International
Ergonomics Association (IEA 2018), Volume V: Human Simulation and Virtual Environments, Work
With Computing Systems (WWCS), Process Control, pp. 324–329, Springer, 2018.
[2] N. Miyata, K. Honoki, Y. Maeda, Y. Endo, M. Tada and Y. Sugiura: Wrap & Sense: Grasp Capture by
a Band Sensor, Adjunct Proc. of 29th Annual Symp. on User Interface Software and Technology (UIST
2016), pp. 87–89, 2016.
[3] T. Hirono, N. Miyata and Y. Maeda: Grasp Synthesis for Variously-Sized Hands Using a Grasp Database
That Covers Variation of Contact Region, Proc. of 3rd Int. Digital Human Modeling Symp. (DHM 2014),
11, 2014.
[4] R. Takahashi, N. Miyata, Y. Maeda and Y. Nakanishi: Grasp Synthesis Considering Graspability for a
Digital Hand with Limited Thumb Range of Motion, Advanced Robotics, Vol. 36, No. 4, pp. 192–204,
2022.
Fig. 23 “Wrap & Sense”
Fig. 24 Grasp Synthesis for Various Hands
Fig. 25 A Synthesized Grasp of a Universal Design Knife by an Elderly Hand
Application of Robot Technology to Human Activity Support
Robot technology should be applied to various fields to support human activities. For example, home appliances are expected to become increasingly robotized to support our daily lives intelligently and effectively. We have a proposal for smart dishwashers: our system supports users in loading a dishwasher [1][2][3]. It recognizes dishes from a picture of the dining table after a meal, calculates an optimal placement of the recognized dishes in the dishwasher, and presents the result to the user as 3D graphics (Fig. 26).
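The sketch below illustrates only the placement step in a highly simplified form: recognized dishes are assigned greedily to rack slots by size. The dish recognition and the optimization used in [1]-[3] are not reproduced, and all names and dimensions are made up for the example.

```python
# Minimal placement sketch: assign recognized dishes to rack slots whose size
# allows them, largest dishes first, preferring the tightest fitting slot.
def assign_dishes(dishes, slots):
    """dishes: {name: diameter in cm}, slots: {slot_id: max diameter in cm}."""
    placement, free_slots = {}, dict(slots)
    for name, dia in sorted(dishes.items(), key=lambda kv: -kv[1]):
        # pick the tightest free slot that still fits this dish
        fitting = [(cap, sid) for sid, cap in free_slots.items() if cap >= dia]
        if fitting:
            _, sid = min(fitting)
            placement[name] = sid
            del free_slots[sid]
    return placement

dishes = {"dinner plate": 26, "soup bowl": 16, "small plate": 14, "mug": 9}
slots = {"lower-1": 28, "lower-2": 27, "upper-1": 18, "upper-2": 12}
print(assign_dishes(dishes, slots))
```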
We are also developing a support system for human origami folding [4][5]. It is composed of an origami simulator for designing and displaying origami folding processes (Fig. 27) and a cutting plotter for adding crease patterns automatically. The system can be used in childhood education and elderly care.
References
[1] Y. Kurata and Y. Maeda: Toward a Smart Dishwasher: A Support System for Optimizing Dishwasher
Loading, IPSJ SIG Technical Report, Vol. 2016-CDS-16, No. 9, 2016 (in Japanese).
[2] K. Imai and Y. Maeda: A User Support System That Optimizes Dishwasher Loading, Proc. of 2017 IEEE
6th Global Conf. on Consumer Electronics (GCCE 2017), pp. 523–524, 2017.
[3] Y. Ogawa and Y. Maeda: Support of Dishwasher Loading by Counting the Number of Dishes with Image
Processing, Proc. of JSME Conf. on Robotics and Mechatronics 2018 (ROBOMECH 2018), 2A2-J17,
2018 (in Japanese).
[4] Y. Nakajima and Y. Maeda: An Origami Support System by Automated Crease Addition and Folding
Process Display, Proc. of 38th RSJ Annual Conf., RSJ2020AC3J1-03, 2020 (in Japanese).
[5] N. Suzuki, Y. Nakajima and Y. Maeda: An Origami Assist System by Simulating Origami with Papers
of Nonzero Thickness and Presenting Folding Processes using Augmented Reality, Proc. of JSME Conf.
on Robotics and Mechatronics (ROBOMECH 2021), 2A1-M06, 2021 (in Japanese).
Fig. 26 Optimized Dish Loading: (a) calculated result, (b) loaded dishes
Fig. 27 Origami Simulator