How can robots understand the actions of other agents (humans and other robots), infer their internal states and intentions, and build models of their behaviour to facilitate better interaction and collaboration? Our laboratory’s interdisciplinary research spans several areas, including human-centred computer vision, machine learning, multiscale user modelling, cognitive architectures, and shared control. We aim to advance fundamental theoretical concepts in these fields without ignoring the engineering challenges of the real world, so our experiments involve real robots, real humans, and real tasks.
Feel free to contact us if you have any queries, are interested in joining us as a student or a researcher, or have a great idea for scientific collaboration.
Research Themes
- Multimodal Perception of Human Actions and Inference of Internal Human States
- Multiscale Human Modelling & Human-in-the-Loop Digital Twins
- Skill Representation and Learning in Humans and Robots
- Motion Planning in Individual and Collaborative Tasks
- Mixed Reality for Human-Robot Interaction
- Explainability, Trust and Privacy in Human-Robot Interaction
- Robotic Caregivers and Physical Interaction
At the core of our research lies the robot system’s ability to perceive what humans are doing and infer their internal cognitive states, including beliefs and intentions. We use multimodal signals (RGB-D, event-based (DVS), thermal, haptic, and audio) to perform this inference, and we research most pipeline steps of human action perception: eye tracking, pose estimation and tracking, human motion analysis, and action segmentation. We collect and publish our datasets for the community’s benefit (see the Software section of this website).
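As a toy illustration of one such pipeline step, the sketch below segments a stream of per-frame motion energy (as might be derived from tracked poses) into candidate action segments by simple thresholding. All function names and thresholds here are hypothetical illustrations, not our actual pipeline.

```python
import numpy as np

def segment_actions(motion_energy, threshold=0.5, min_len=3):
    """Split a 1-D motion-energy signal into candidate action segments.

    A frame is 'active' when its energy exceeds the threshold; runs of
    at least `min_len` consecutive active frames become (start, end)
    segments, end-exclusive.
    """
    active = motion_energy > threshold
    segments, start = [], None
    for t, a in enumerate(active):
        if a and start is None:
            start = t                      # segment opens
        elif not a and start is not None:
            if t - start >= min_len:       # keep only long-enough runs
                segments.append((start, t))
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments

energy = np.array([0.1, 0.9, 0.8, 0.7, 0.2, 0.1, 0.6, 0.9, 0.8, 0.1])
print(segment_actions(energy))  # -> [(1, 4), (6, 9)]
```

Real action segmentation operates on far richer features and learned boundary models, but the thresholding loop conveys the basic idea of turning a continuous signal into discrete action intervals.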
Key Publications:
- X. Zhang, P. Angeloudis and Y. Demiris, 2022, , in IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 11, pp. 20773-20782.
- H. Razali and Y. Demiris, 2022, , 2022 International Conference on Robotics and Automation (ICRA), pp. 8497-8503.
- Y. Jang and Y. Demiris, 2022, , 2022 International Conference on Robotics and Automation (ICRA), pp. 8726-8733.
- U. M. Nunes and Y. Demiris, 2022, , 2022 International Conference on Robotics and Automation (ICRA), pp. 508-514.
- U. M. Nunes and Y. Demiris, 2021, , in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 12, pp. 9561-9573.
- Nunes, U. M., & Demiris, Y., 2020, , in European Conference on Computer Vision, pp. 161-176.
- V. Schettino and Y. Demiris, 2019, , 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4600-4606.
- Buizza C, Fischer T, Demiris Y, 2019, , IEEE Winter Conference on Applications of Computer Vision.
- Fischer T, Chang HJ, Demiris Y, 2018, , Proceedings of the European Conference on Computer Vision, pp. 339-357.
- Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y, 2018, , IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 12, pp. 2920-2934.
- Choi, J., Chang, H. J., Fischer, T., Yun, S., Lee, K., Jeong, J., ... & Choi, J. Y., 2018, , in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 479-488.
- Nunes, U. M., & Demiris, Y., 2018, , in BMVC, Vol. 6, No. 7, p. 19.
- Choi, J., Jin Chang, H., Yun, S., Fischer, T., Demiris, Y., & Young Choi, J, 2017, , in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4807-4816.
- Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y, 2016, , IEEE Conference on Computer Vision and Pattern Recognition, pp. 4216-4425.
- Choi, J., Chang, H. J., Jeong, J., Demiris, Y., & Choi, J. Y., 2016, , in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4321-4330.
- Chang HJ, Demiris Y, 2015, , IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3138-3146.
We are interested in learning and maintaining multiscale human models to personalise the contextual and temporal appropriateness of the assistance our robots provide: “how should we help this person, and when?”. We typically represent humans at multiple levels of abstraction: from how they move (spatiotemporal trajectory representations using statistical and neural network models), to how they solve tasks (action sequences using context-free grammatical representations), to how they use assistive equipment; hence we term our models “multiscale” models.
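To make the grammar-based level concrete, here is a minimal sketch of checking whether an observed sequence of action labels is derivable from a context-free task grammar. The grammar, action labels, and recursive matcher are illustrative inventions; our actual task representations are considerably richer.

```python
# Hypothetical task grammar: nonterminals map to lists of alternative
# right-hand sides; symbols absent from the grammar are terminals
# (observed action labels).
GRAMMAR = {
    "MakeDrink": [["Prepare", "Pour", "stir"]],
    "Prepare":   [["boil_water"], ["fetch_cup", "boil_water"]],
    "Pour":      [["pour_water"]],
}

def derives(symbol, actions):
    """True if `symbol` can derive exactly this action sequence."""
    if symbol not in GRAMMAR:  # terminal: must match one observed label
        return len(actions) == 1 and actions[0] == symbol
    return any(matches(rhs, actions) for rhs in GRAMMAR[symbol])

def matches(rhs, actions):
    """True if the right-hand side can be split to cover the sequence."""
    if not rhs:
        return not actions
    head, rest = rhs[0], rhs[1:]
    return any(
        derives(head, actions[:i]) and matches(rest, actions[i:])
        for i in range(len(actions) + 1)
    )

print(derives("MakeDrink", ["fetch_cup", "boil_water", "pour_water", "stir"]))  # True
```

A recogniser like this lets a robot judge whether a partial observation is consistent with a known task, which is one way grammatical models support prediction of what a person will do next.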
Key Publications:
- Y. Zhong and Y. Demiris, 2024, , in Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38, No. 9, pp. 10270-10278.
- Y. Zhong, F. Zhang and Y. Demiris, 2023, in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5.
- Y. Gao, H. J. Chang and Y. Demiris, 2020, , in IEEE Access, vol. 8, pp. 45700-45714.
- F. Zhang, A. Cully and Y. Demiris, 2019, , in IEEE Transactions on Robotics, vol. 35, no. 4, pp. 873-888.
- A. Cully and Y. Demiris, 2019, , in IEEE Transactions on Knowledge and Data Engineering, vol. 32, no. 10, pp. 2000-2013.
- F. Zhang, A. Cully and Y. Demiris, 2017, , 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3603-3610.
- Georgiou, T., & Demiris, Y., 2017, , User Modeling and User-Adapted Interaction, 27(2), 267-311.
- Yixing Gao, Hyung Jin Chang and Y. Demiris, 2015, User modelling for personalised dressing assistance by humanoid robots, 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1840-1845.
- Lee, K., Su, Y., Kim, T. K., & Demiris, Y., 2013, , Robotics and Autonomous Systems, 61(12), 1323-1334.
We are researching the learning processes that allow humans and robots to acquire and represent sensorimotor skills. Our research spans several representational paradigms: from embodiment-oriented statistical and neural representations, to more cognition-oriented ontological and knowledge-graph-based representations. We use active learning (motor and goal babbling and exploration) and social learning (e.g. human observation and imitation) to expand the range of skills our robots have.
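A minimal sketch of motor babbling is given below: a toy two-link arm explores random joint commands, records the outcomes, and then answers inverse queries (“which command reaches this goal?”) by nearest-neighbour lookup in the babbled experience. The arm model, sample counts, and lookup strategy are illustrative assumptions, not our actual learning methods.

```python
import math
import random

def forward(q):
    """Toy 2-link planar arm forward kinematics (unit link lengths)."""
    a, b = q
    return (math.cos(a) + math.cos(a + b), math.sin(a) + math.sin(a + b))

# Motor babbling: sample random joint commands, record (command, outcome).
random.seed(0)
memory = []
for _ in range(2000):
    q = (random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi))
    memory.append((q, forward(q)))

def inverse(goal):
    """Inverse model learned from babbled experience: return the stored
    command whose recorded outcome lies closest to the goal."""
    return min(
        memory,
        key=lambda m: (m[1][0] - goal[0]) ** 2 + (m[1][1] - goal[1]) ** 2,
    )[0]

goal = (1.0, 1.0)
q = inverse(goal)
reached = forward(q)
print(q, reached)  # a joint command whose outcome lies near the goal
```

Goal babbling refines this loop by sampling in goal space rather than motor space, concentrating exploration where it improves task competence; social learning then seeds the same memory with observed or imitated demonstrations instead of random commands.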
Key Publications:
- Zhang, F., & Demiris, Y., 2022, , Science Robotics, 7(65), eabm6010.
- H. Razali and Y. Demiris, 2021, , 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7315-7320.
- Zambelli, M., Cully, A., & Demiris, Y., 2020, , Robotics and Autonomous Systems, 123, 103312.
- Korkinof, D., & Demiris, Y., 2017, , Pattern Recognition, 66, 190-201.
- M. Petit, T. Fischer and Y. Demiris, 2016, , in IEEE Transactions on Cognitive and Developmental Systems, vol. 8, no. 3, pp. 201-213.
- A. Ribes, J. Cerquides, Y. Demiris and R. Lopez de Mantaras, 2016, , in IEEE Transactions on Cognitive and Developmental Systems, vol. 8, no. 1, pp. 26-41.
- H. Soh and Y. Demiris, 2015, , in IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 3, pp. 522-536.
- H. Soh and Y. Demiris, 2014, , in IEEE Transactions on Haptics, vol. 7, no. 4, pp. 512-525.
- Wu, Y., Su, Y., & Demiris, Y., 2014, , Robotics and Autonomous Systems, 62(10), 1517-1530.
How can robots plan and execute actions in an optimal, safe and trustworthy manner? We research robot motion generation algorithms for individual and collaborative tasks, paying particular attention to how control can be shared between human and robot collaborators. Beyond fundamental issues in human-robot shared and collaborative control, we are also interested in interactions involving multiple humans and robots, for example triadic interactions between an assistive robot, an assisted person, and their carer.
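The core idea of shared control can be sketched as a linear arbitration between the human’s and the robot’s commands. The blending function below is a deliberately simple illustration: in practice the authority parameter would be adapted online (for example from inferred human intention or confidence), whereas here it is fixed.

```python
import numpy as np

def blend(u_human, u_robot, alpha):
    """Linear arbitration between human and robot velocity commands.

    alpha = 1.0 gives the human full authority; alpha = 0.0 gives the
    robot full authority. Values are clipped to [0, 1] for safety.
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return alpha * np.asarray(u_human) + (1.0 - alpha) * np.asarray(u_robot)

# Human steers right, robot's planner steers forward; authority 70/30.
u = blend([1.0, 0.0], [0.0, 1.0], alpha=0.7)
print(u)  # -> [0.7 0.3]
```

Richer arbitration schemes replace the scalar alpha with state-dependent policies, haptic negotiation, or probabilistic fusion of the two commands, but the blended-command structure remains the common backbone.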
Key Publications:
- F. Zhang and Y. Demiris, 2022, , in Sci. Robot. 7, eabm6010
- V. Girbés-Juan, V. Schettino, L. Gracia et al., 2022, , in J Multimodal User Interfaces 16, 219–238
- V. Girbés-Juan, V. Schettino, Y. Demiris and J. Tornero, 2021, , in IEEE Transactions on Haptics, vol. 14, no. 1, pp. 44-56, 1 Jan.-March 2021
- T. Fischer, J. Y. Puigbò, D. Camilleri, P. D. H. Nguyen, C. Moulin-Frier, S. Lallée, G. Metta, T. J. Prescott, Y. Demiris and P. F. M. J. Verschure, 2018, , in Front. Robot. AI 5:22
- V. Schettino and Y. Demiris, 2020, , in IEEE International Conference on Robotics and Automation (ICRA), pp. 5474-5480
- A. Kucukyilmaz and Y. Demiris, 2018, , in IEEE Transactions on Haptics, vol. 11, no. 3, pp. 431-442
- H. Soh and Y. Demiris, 2015, , in J. Hum.-Robot Interact. 4, 3 (December 2015), 76–100.
- Y. Su, Y. Wu, H. Soh, Z. Du and Y. Demiris, 2013, , in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2493-2499
Virtual and Augmented Reality interfaces have great potential for enhancing the interaction between humans and complex robot systems. Our research investigates how visualising and interacting with mixed reality information (for example, dynamic signifiers or dynamically-determined affordances) can facilitate human-robot collaboration through enhanced explainability and more fluid, efficient control.
Key Publications:
- R. Chacón-Quesada and Y. Demiris, 2022, in IEEE Robotics & Automation Magazine, vol. 29, no. 1, pp. 22-34
- R. Chacón-Quesada and Y. Demiris, 2020, , in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 11439-11444
- M. Zolotas and Y. Demiris, 2019, , in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3020-3026
- R. Chacón-Quesada and Y. Demiris, 2019, , in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4812-4818
- J. Elsdon and Y. Demiris, 2018, , in IEEE International Conference on Robotics and Automation (ICRA), pp. 1939-1946
- J. Elsdon and Y. Demiris, 2017, , in IEEE International Conference on Robotics and Automation (ICRA), pp. 4891-4897
As robots become more integrated into our lives, it's crucial they understand and respect privacy and trust. We focus on three key areas: 1) enabling robots to gauge human trust and adapt their behaviour accordingly, 2) designing robots with clear and understandable decision-making processes, and 3) ensuring they learn personalised behaviours without compromising user privacy.
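One established way to learn personalised behaviours without centralising user data is federated averaging: clients train on their own data locally and only model parameters are aggregated. The sketch below illustrates the idea on a toy least-squares problem; the function names, model, and hyperparameters are illustrative assumptions, not a specific system of ours.

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=5):
    """One client's local training: gradient steps on a least-squares
    objective over private data (X, y). Raw data never leaves the client."""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Clients train locally; only their weight vectors are averaged."""
    updates = [local_update(global_w, data) for data in clients]
    return np.mean(updates, axis=0)

# Three clients with private samples of the same underlying relation.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # converges close to [2, -1] without any client sharing raw data
```

Personalisation variants keep a per-user head or fine-tune the aggregated model locally, so individual preferences are captured while the shared model still benefits from all users.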
Key Publications:
- C. Goubard and Y. Demiris, 2024, in 2024 IEEE International Conference on Robotics and Automation (ICRA).
- F. Estevez Casado and Y. Demiris, 2022, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 9326-9331.
Caregiving robots, a subset of assistive robotics, are developed to support care-related tasks for elderly individuals and people with mobility impairments, aiming to improve their quality of life and independence while reducing the workload on human caregivers.
Key Publications:
- Y. Gu and Y. Demiris, 2024, , in IEEE International Conference on Intelligent Robots and Systems (IROS).
- S. Kotsovolis and Y. Demiris, 2024, , in 2024 IEEE International Conference on Robotics and Automation (ICRA).
- Y. Gu and Y. Demiris, 2024, , in IEEE Robotics and Automation Letters, vol. 9, no. 6, pp. 5751-5758
- S. Kotsovolis and Y. Demiris, 2023, , in 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 9865-9871.
- Zhang, F., & Demiris, Y., 2022, , Science Robotics, 7(65), eabm6010.
- Y. Gao, H. J. Chang and Y. Demiris, 2020, , in IEEE Access, vol. 8, pp. 45700-45714.
- F. Zhang, A. Cully and Y. Demiris, 2019, , in IEEE Transactions on Robotics, vol. 35, no. 4, pp. 873-888.
- F. Zhang, A. Cully and Y. Demiris, 2017, , 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3603-3610.