Seerose

Starts from: Mon, February 1, 2016

Tags
  • Service Robotics
  • Smart Home
Class Description

The research project deals with the integration of service robots into an intelligent home environment to accomplish tasks cooperatively. This combination will extend the sensory variety of both the smart home and the service robots and will allow more sophisticated applications. One focus of the project is the development of an adequate middleware architecture to enable communication between the different agents in the system, e.g. robots, sensors and actuators. Furthermore, the robots will be programmable by means of user-friendly learning-from-demonstration techniques. The whole project is embedded in a health care scenario to support elderly people in their everyday lives.
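
To make the middleware idea concrete, the minimal sketch below models the smart home components and the robots as agents on a shared publish/subscribe message bus. It is an illustrative toy in Python, not the project's actual implementation; the MessageBus class, the topic names and the agent roles are hypothetical.

    # Minimal sketch of topic-based agent communication, assuming a
    # publish/subscribe middleware. All names are hypothetical examples.
    from collections import defaultdict
    from typing import Callable, Dict, List

    class MessageBus:
        """Toy in-process middleware: agents publish messages to named
        topics, and other agents subscribe callbacks to those topics."""

        def __init__(self) -> None:
            self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

        def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
            self._subscribers[topic].append(callback)

        def publish(self, topic: str, message: dict) -> None:
            for callback in self._subscribers[topic]:
                callback(message)

    bus = MessageBus()

    # A mobile service robot reacts to events reported by a smart home sensor.
    def on_motion(event: dict) -> None:
        print(f"robot: driving to the {event['room']}")

    bus.subscribe("sensor/motion", on_motion)

    # A wall-mounted motion sensor acts as a publisher; the robot reacts.
    bus.publish("sensor/motion", {"room": "living room", "detected": True})

In a deployed system a networked transport, for example ROS topics or an MQTT broker, would take the place of the in-process bus, but the decoupling between sensing agents and acting agents stays the same.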

Publications

  • D. Sprute, K. Tönnies, and M. König, “Virtual Borders: Accurate Definition of a Mobile Robot’s Workspace Using Augmented Reality,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 8574–8581.

    We address the problem of interactively controlling the workspace of a mobile robot to ensure a human-aware navigation. This is especially of relevance for non-expert users living in human-robot shared spaces, e.g. home environments, since they want to keep the control of their mobile robots, such as vacuum cleaning or companion robots. Therefore, we introduce virtual borders that are respected by a robot while performing its tasks. For this purpose, we employ a RGB-D Google Tango tablet as human-robot interface in combination with an augmented reality application to flexibly define virtual borders. We evaluated our system with 15 non-expert users concerning accuracy, teaching time and correctness and compared the results with other baseline methods based on visual markers and a laser pointer. The experimental results show that our method features an equally high accuracy while reducing the teaching time significantly compared to the baseline methods. This holds for different border lengths, shapes and variations in the teaching process. Finally, we demonstrated the correctness of the approach, i.e. the mobile robot changes its navigational behavior according to the user-defined virtual borders.

    @inproceedings{sprute:2018b,
    author = {Dennis Sprute and Klaus T{\"o}nnies and Matthias K{\"o}nig},
    booktitle={{2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}},
    title = {{Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality}},
    year = {2018},
    pages={8574--8581},
    url = {https://arxiv.org/abs/1709.00954},
    abstract={We address the problem of interactively controlling the workspace of a mobile robot to ensure a human-aware navigation. This is especially of relevance for non-expert users living in human-robot shared spaces, e.g. home environments, since they want to keep the control of their mobile robots, such as vacuum cleaning or companion robots. Therefore, we introduce virtual borders that are respected by a robot while performing its tasks. For this purpose, we employ a RGB-D Google Tango tablet as human-robot interface in combination with an augmented reality application to flexibly define virtual borders. We evaluated our system with 15 non-expert users concerning accuracy, teaching time and correctness and compared the results with other baseline methods based on visual markers and a laser pointer. The experimental results show that our method features an equally high accuracy while reducing the teaching time significantly compared to the baseline methods. This holds for different border lengths, shapes and variations in the teaching process. Finally, we demonstrated the correctness of the approach, i.e. the mobile robot changes its navigational behavior according to the user-defined virtual borders.}
    }

  • A. Pörtner, L. Schröder, R. Rasch, D. Sprute, M. Hoffmann, and M. König, “The Power of Color: A Study on the Effective Use of Colored Light in Human-Robot Interaction,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 3395–3402.

    In times of more and more complex interaction techniques, we point out the powerfulness of colored light as a simple and cheap feedback mechanism. Since it is visible over a distance and does not interfere with other modalities, it is especially interesting for mobile robots. In an online survey, we asked 56 participants to choose the most appropriate colors for scenarios that were presented in the form of videos. In these scenarios a mobile robot accomplished tasks, in some with success, in others it failed because the task is not feasible, in others it stopped because it waited for help. We analyze in what way the color preferences differ between these three categories. The results show a connection between colors and meanings and that it depends on the participants’ technical affinity, experience with robots and gender how clear the color preference is for a certain category. Finally, we found out that the participants’ favorite color is not related to color preferences.

    @inproceedings{poertner:2018a,
    author = {Aljoscha P{\"o}rtner and Lilian Schr{\"o}der and Robin Rasch and Dennis Sprute and Martin Hoffmann and Matthias K{\"o}nig},
    booktitle={{2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}},
    title = {{The Power of Color: A Study on the Effective Use of Colored Light in Human-Robot Interaction}},
    year = {2018},
    pages={3395--3402},
    url = {https://arxiv.org/abs/1802.07557},
    abstract={In times of more and more complex interaction techniques, we point out the powerfulness of colored light as a simple and cheap feedback mechanism. Since it is visible over a distance and does not interfere with other modalities, it is especially interesting for mobile robots. In an online survey, we asked 56 participants to choose the most appropriate colors for scenarios that were presented in the form of videos. In these scenarios a mobile robot accomplished tasks, in some with success, in others it failed because the task is not feasible, in others it stopped because it waited for help. We analyze in what way the color preferences differ between these three categories. The results show a connection between colors and meanings and that it depends on the participants' technical affinity, experience with robots and gender how clear the color preference is for a certain category. Finally, we found out that the participants' favorite color is not related to color preferences.}
    }

  • A. Pörtner, M. Hoffmann, S. Zug, and M. König, “SwarmRob: A Toolkit for Reproducibility and Sharing of Experimental Artifacts in Robotics Research,” in 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2018.

    Due to the complexity of robotics, the reproducibility of results and experiments is one of the fundamental problems in robotics research. While the problem has been identified by the community, the approaches that address the problem appropriately are limited. The toolkit proposed in this paper tries to deal with the problem of reproducibility and sharing of experimental artifacts in robotics research by a holistic approach based on operating-system-level virtualization. The experimental artifacts of an experiment are isolated in “containers” that can be distributed to other researchers. Based on this, this paper presents a novel experimental workflow to describe, execute and distribute experimental software-artifacts to heterogeneous robots dynamically. As a result, the proposed solution supports researchers in executing and reproducing experimental evaluations.

    @inproceedings{poertner:2018b,
    author = {Aljoscha P{\"o}rtner and Martin Hoffmann and Sebastian Zug and Matthias K{\"o}nig},
    booktitle={{2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC)}},
    title = {{SwarmRob: A Toolkit for Reproducibility and Sharing of Experimental Artifacts in Robotics Research}},
    year = {2018},
    url = {https://arxiv.org/abs/1801.04199},
    abstract={Due to the complexity of robotics, the reproducibility of results and experiments is one of the fundamental problems in robotics research. While the problem has been identified by the community, the approaches that address the problem appropriately are limited. The toolkit proposed in this paper tries to deal with the problem of reproducibility and sharing of experimental artifacts in robotics research by a holistic approach based on operating-system-level virtualization. The experimental artifacts of an experiment are isolated in "containers" that can be distributed to other researchers. Based on this, this paper presents a novel experimental workflow to describe, execute and distribute experimental software-artifacts to heterogeneous robots dynamically. As a result, the proposed solution supports researchers in executing and reproducing experimental evaluations.}
    }

  • R. Rasch, S. Wachsmuth, and M. König, “A Joint Motion Model for Human-Like Robot-Human Handover,” in 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), 2018.

    In future, robots will be present in everyday life. The development of these supporting robots is a challenge. A fundamental task for assistance robots is to pick up and hand over objects to humans. By interacting with users, soft factors such as predictability, safety and reliability become important factors for development. Previous works show that collaboration with robots is more acceptable when robots behave and move human-like. In this paper, we present a motion model based on the motion profiles of individual joints. These motion profiles are based on observations and measurements of joint movements in human-human handover. We implemented this joint motion model (JMM) on a humanoid and a non-humanoidal industrial robot to show the movements to subjects. Particular attention was paid to the recognizability and human similarity of the movements. The results show that people are able to recognize human-like movements and perceive the movements of the JMM as more human-like compared to a traditional model. Furthermore, it turns out that the differences between a linear joint space trajectory and JMM are more noticeable in an industrial robot than in a humanoid robot.

    @inproceedings{rasch:2018a,
    author = {Robin Rasch and Sven Wachsmuth and Matthias K{\"o}nig},
    title = {{A Joint Motion Model for Human-Like Robot-Human Handover}},
    year = {2018},
    booktitle = {{2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)}},
    url = {https://arxiv.org/abs/1808.09280},
    abstract={In future, robots will be present in everyday life. The development of these supporting robots is a challenge. A fundamental task for assistance robots is to pick up and hand over objects to humans. By interacting with users, soft factors such as predictability, safety and reliability become important factors for development. Previous works show that collaboration with robots is more acceptable when robots behave and move human-like. In this paper, we present a motion model based on the motion profiles of individual joints. These motion profiles are based on observations and measurements of joint movements in human-human handover. We implemented this joint motion model (JMM) on a humanoid and a non-humanoidal industrial robot to show the movements to subjects. Particular attention was paid to the recognizability and human similarity of the movements. The results show that people are able to recognize human-like movements and perceive the movements of the JMM as more human-like compared to a traditional model. Furthermore, it turns out that the differences between a linear joint space trajectory and JMM are more noticeable in an industrial robot than in a humanoid robot.}
    }

  • D. Sprute, R. Rasch, A. Pörtner, S. Battermann, and M. König, “Gesture-Based Object Localization for Robot Applications in Intelligent Environments,” in 2018 International Conference on Intelligent Environments (IE), 2018, pp. 48–55.

    Drawing attention to objects and their localization in the environment are essential building blocks for domestic robot applications, e.g. fetch-and-delivery or navigation tasks. For this purpose, human pointing gestures turned out to be a natural and intuitive interaction method to transfer the spatial data of an object from human to robot. Current approaches only use the robot’s on-board sensors to perceive gesture-based instructions, which restricts them to the field of view of the robot’s camera. The integration of mobile robots into intelligent environments, such as smart homes, opens new possibilities to overcome this limitation by utilizing components of the surrounding environment as additional sensors. We take advantage of these new possibilities and propose a multi-stage object localization system based on human pointing gestures that considers the whole intelligent environment as interaction partner. Our experimental results show that our multi-stage approach successfully refines the position initially proposed by a human pointing gesture by employing a distributed camera network integrated into the environment for object localization.

    @inproceedings{sprute:2018a,
    author={Dennis Sprute and Robin Rasch and Aljoscha P{\"o}rtner and Sven Battermann and Matthias K{\"o}nig},
    booktitle={{2018 International Conference on Intelligent Environments (IE)}},
    title={{Gesture-Based Object Localization for Robot Applications in Intelligent Environments}},
    year={2018},
    pages={48--55},
    url = {https://ieeexplore.ieee.org/document/8595031},
    abstract= {Drawing attention to objects and their localization in the environment are essential building blocks for domestic robot applications, e.g. fetch-and-delivery or navigation tasks. For this purpose, human pointing gestures turned out to be a natural and intuitive interaction method to transfer the spatial data of an object from human to robot. Current approaches only use the robot's on-board sensors to perceive gesture-based instructions, which restricts them to the field of view of the robot's camera. The integration of mobile robots into intelligent environments, such as smart homes, opens new possibilities to overcome this limitation by utilizing components of the surrounding environment as additional sensors. We take advantage of these new possibilities and propose a multi-stage object localization system based on human pointing gestures that considers the whole intelligent environment as interaction partner. Our experimental results show that our multi-stage approach successfully refines the position initially proposed by a human pointing gesture by employing a distributed camera network integrated into the environment for object localization.}
    }

  • R. Rasch, S. Wachsmuth, and M. König, “Understanding Movements of Hand-Over Between two Persons to Improve Humanoid Robot Systems,” in 2017 IEEE-RAS 17th International Conference on Humanoid Robots (Humanoids), 2017, pp. 856–861.

    To enable personal robots to operate in human spaces, it is necessary that robots support everyday tasks like handing over an object. Studies show that robots have to move and behave human-like, to improve social acceptance. Therefore, it is necessary to study and model human movements. This paper studies and analyses the movements of arms during hand-over between two persons in order to extract the characteristic features (elementary movements of joints, duration, angular and linear velocities, etc.). In the present study, we are using inertial measurement units with 6-axis (gyroscope and accelerometer) on wrist, elbow and shoulder to measure the movements and evaluate them. Our results show a general movement pattern for hand-overs between humans with two variants of twisting the elbow. The results of our study provide a basis for developing a human-like hand-over controller for humanoid robot systems or human like manipulators.

    @inproceedings{rasch:2017b,
    author = {Rasch, Robin and Wachsmuth, Sven and K{\"o}nig, Matthias},
    title = {{Understanding Movements of Hand-Over Between two Persons to Improve Humanoid Robot Systems}},
    booktitle = {{2017 IEEE-RAS 17th International Conference on Humanoid Robots (Humanoids)}},
    year = {2017},
    month={11},
    pages = {856--861},
    abstract= {To enable personal robots to operate in human spaces, it is necessary that robots support everyday tasks like handing over an object. Studies show that robots have to move and behave human-like, to improve social acceptance. Therefore, it is necessary to study and model human movements. This paper studies and analyses the movements of arms during hand-over between two persons in order to extract the characteristic features (elementary movements of joints, duration, angular and linear velocities, etc.). In the present study, we are using inertial measurement units with 6-axis (gyroscope and accelerometer) on wrist, elbow and shoulder to measure the movements and evaluate them. Our results show a general movement pattern for hand-overs between humans with two variants of twisting the elbow. The results of our study provide a basis for developing a human-like hand-over controller for humanoid robot systems or human like manipulators.},
    url={http://ieeexplore.ieee.org/document/8246972/}
    }

  • R. Rasch and M. König, “Human-Assisted Learning of Object Models Through Active Object Exploration,” in Proceedings of the 5th International Conference on Human Agent Interaction, 2017, pp. 387–391.

    As robots are increasingly acting in real-world environments, learning and recognition of objects is a problem. Existing methods for learning visual object models use offline techniques to generate high-quality models or online techniques to dynamically expand the object model library. We present an online learning method that creates visual object models through active object exploration. Our approach enables a robot to use manipulations of an object to learn autonomously visual features from several points of view. The ability to segment background, robot parts and the object in the visual space allows to filter irrelevant feature points. This improves the quality of the object model while decreasing its size. Finally, a human-robot interaction enables a human collaborator to improve the object model. The method is evaluated on a Pepper robot, showing the improvement in performance and accuracy with respect to interactive learning.

    @inproceedings{rasch:2017a,
    author = {Rasch, Robin and K{\"o}nig, Matthias},
    title = {{Human-Assisted Learning of Object Models Through Active Object Exploration}},
    booktitle = {{Proceedings of the 5th International Conference on Human Agent Interaction}},
    series = {HAI '17},
    year = {2017},
    month={10},
    isbn = {978-1-4503-5113-3},
    pages = {387--391},
    url = {http://doi.acm.org/10.1145/3125739.3132601},
    publisher = {ACM},
    abstract= {As robots are increasingly acting in real-world environments, learning and recognition of objects is a problem. Existing methods for learning visual object models use offline techniques to generate high-quality models or online techniques to dynamically expand the object model library. We present an online learning method that creates visual object models through active object exploration. Our approach enables a robot to use manipulations of an object to learn autonomously visual features from several points of view. The ability to segment background, robot parts and the object in the visual space allows to filter irrelevant feature points. This improves the quality of the object model while decreasing its size. Finally, a human-robot interaction enables a human collaborator to improve the object model. The method is evaluated on a Pepper robot, showing the improvement in performance and accuracy with respect to interactive learning.},
    }

  • D. Sprute, R. Rasch, K. Tönnies, and M. König, “A Framework for Interactive Teaching of Virtual Borders to Mobile Robots,” in 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2017, pp. 1175–1181.

    The increasing number of robots in home environments leads to an emerging coexistence between humans and robots. Robots undertake common tasks and support the residents in their everyday life. People appreciate the presence of robots in their environment as long as they keep the control over them. One important aspect is the control of a robot’s workspace. Therefore, we introduce virtual borders to precisely and flexibly define the workspace of mobile robots. First, we propose a novel framework that allows a person to interactively restrict a mobile robot’s workspace. To show the validity of this framework, a concrete implementation based on visual markers is implemented. Afterwards, the mobile robot is capable of performing its tasks while respecting the new virtual borders. The approach is accurate, flexible and less time consuming than explicit robot programming. Hence, even non-experts are able to teach virtual borders to their robots which is especially interesting in domains like vacuuming or service robots in home environments.

    @inproceedings{sprute:2017b,
    author = {Dennis Sprute and Robin Rasch and Klaus T{\"o}nnies and Matthias K{\"o}nig},
    title = {{A Framework for Interactive Teaching of Virtual Borders to Mobile Robots}},
    booktitle={{2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)}},
    pages={1175--1181},
    year = {2017},
    month={09},
    url = {http://ieeexplore.ieee.org/document/8172453/},
    abstract={The increasing number of robots in home environments leads to an emerging coexistence between humans and robots. Robots undertake common tasks and support the residents in their everyday life. People appreciate the presence of robots in their environment as long as they keep the control over them. One important aspect is the control of a robot’s workspace. Therefore, we introduce virtual borders to precisely and flexibly define the workspace of mobile robots. First, we propose a novel framework that allows a person to interactively restrict a mobile robot’s workspace. To show the validity of this framework, a concrete implementation based on visual markers is implemented. Afterwards, the mobile robot is capable of performing its tasks while respecting the new virtual borders. The approach is accurate, flexible and less time consuming than explicit robot programming. Hence, even non-experts are able to teach virtual borders to their robots which is especially interesting in domains like vacuuming or service robots in home environments.}
    }

  • D. Sprute, A. Pörtner, R. Rasch, S. Battermann, and M. König, “Ambient Assisted Robot Object Search,” in Enhanced Quality of Life and Smart Living: 15th International Conference on Smart Homes and Health Telematics, ICOST 2017, Paris, France, 2017, pp. 112–123.

    In this paper, we integrate a mobile service robot into a smart home environment in order to improve the search of objects by a robot. We propose a hierarchical search system consisting of three layers: (1) local search, (2) global search and (3) exploration. This approach extends the sensory variety of the mobile service robot by employing additional smart home sensors for the object search. Therefore, the robot is no more limited to its on-board sensors. Furthermore, we provide a visual feedback system integrated into the smart home to effectively inform the user about the current state of the search process. We evaluated our system in a fetch-and-delivery task, and the experimental results revealed a more efficient and faster search compared to a search without support of a smart home. Such a system can assist elderly people, especially people with cognitive impairments, in their home environments and support them to live self-determined in old age.

    @inproceedings{sprute:2017a,
    author={Sprute, Dennis and P{\"o}rtner, Aljoscha and Rasch, Robin and Battermann, Sven and K{\"o}nig, Matthias},
    title={{Ambient Assisted Robot Object Search}},
    booktitle={{Enhanced Quality of Life and Smart Living: 15th International Conference on Smart Homes and Health Telematics, ICOST 2017, Paris, France}},
    year={2017},
    month={08},
    publisher={Springer International Publishing},
    pages={112--123},
    abstract={In this paper, we integrate a mobile service robot into a smart home environment in order to improve the search of objects by a robot. We propose a hierarchical search system consisting of three layers: (1) local search, (2) global search and (3) exploration. This approach extends the sensory variety of the mobile service robot by employing additional smart home sensors for the object search. Therefore, the robot is no more limited to its on-board sensors. Furthermore, we provide a visual feedback system integrated into the smart home to effectively inform the user about the current state of the search process. We evaluated our system in a fetch-and-delivery task, and the experimental results revealed a more efficient and faster search compared to a search without support of a smart home. Such a system can assist elderly people, especially people with cognitive impairments, in their home environments and support them to live self-determined in old age.},
    isbn={978-3-319-66188-9},
    url={https://doi.org/10.1007/978-3-319-66188-9_10}
    }

  • R. Rasch, A. Pörtner, M. Hoffmann, and M. König, “A Decoupled Three-layered Architecture for Service Robotics in Intelligent Environments,” in Proceedings of the 1st Workshop on Embodied Interaction with Smart Environments, Tokyo, Japan, 2016, pp. 1:1–1:8.

    To enable the usage of service robots in a smart environment we have developed a decoupled three-layered architecture. The architecture is separated into three parts: hardware abstraction layer, domain level as well as control and collaboration layer. The hardware abstraction layer considers hardware drivers and elementary functions, like inverse kinematics for manipulators. The domain level wraps the hardware in autonomous agents. They offer features in the system, can communicate with each other and are responsible for local task planning. The third layer enables the global task planning and allows the system to solve more complex tasks that are shared between single agents. The layer is based on an ontology to reduce the effort of implementing and incorporating new agents. Purpose of this work is to build a slim, reliable and effective architecture, which is able to work with heterogeneous robot and smart-home hardware. Furthermore, it should be able to solve dynamically tasks with the available resources.

    @inproceedings{rasch:2016,
    author = {Rasch, Robin and P{\"o}rtner, Aljoscha and Hoffmann, Martin and K{\"o}nig, Matthias},
    title = {{A Decoupled Three-layered Architecture for Service Robotics in Intelligent Environments}},
    booktitle = {{Proceedings of the 1st Workshop on Embodied Interaction with Smart Environments, Tokyo, Japan}},
    series = {EISE '16},
    year = {2016},
    month ={11},
    isbn = {978-1-4503-4555-2},
    pages = {1:1--1:8},
    url = {http://doi.acm.org/10.1145/3008028.3008032},
    publisher = {ACM},
    abstract = {To enable the usage of service robots in a smart environment we have developed a decoupled three-layered architecture. The architecture is separated into three parts: hardware abstraction layer, domain level as well as control and collaboration layer. The hardware abstraction layer considers hardware drivers and elementary functions, like inverse kinematics for manipulators. The domain level wraps the hardware in autonomous agents. They offer features in the system, can communicate with each other and are responsible for local task planning. The third layer enables the global task planning and allows the system to solve more complex tasks that are shared between single agents. The layer is based on an ontology to reduce the effort of implementing and incorporating new agents. Purpose of this work is to build a slim, reliable and effective architecture, which is able to work with heterogeneous robot and smart-home hardware. Furthermore, it should be able to solve dynamically tasks with the available resources.}
    }

Cooperation Partner

Robert Bosch GmbH
