
Institute for Artificial Intelligence and Autonomous Systems (A2S)

b-it lecture series: Invited talk by Prof. Marta Romeo


Date

Thursday, 07 November 2024

Time

10:45 - 12:15

Online event

Webex

We are happy to invite you to a talk by Prof. Marta Romeo from Heriot-Watt University, held in the context of the MAS course Human-Centred Interaction in Robotics (HCIR) and entitled "Integrating Multimodal Signals in Human-Robot Interaction: Progress and Future Directions".

Abstract

Effective communication in human interactions is facilitated by our ability to seamlessly integrate signals from various bodily articulators with spoken language. These signals help manage interactions, such as indicating when to take turns speaking, and enhance our understanding of social aspects like emotions and intentions. This capacity to perceive, comprehend, and integrate a wide range of signals is a key trait that defines us as social beings. Therefore, when developing social robots intended for integration into our social environments, it is crucial to equip them with similar capabilities. Over the years, human-robot interaction has utilized techniques from linguistics, natural language processing, computer vision, and social signal processing to achieve comparable levels of signal integration in robotic companions. In this talk, we will explore the progress made in this field, considering human factors and advancements in AI solutions.


Short Bio

Dr Marta Romeo is an Assistant Professor and Bicentennial Research Leader in Human-Robot Interaction in the Computer Science department of the School of Mathematical and Computer Sciences (MACS) at Heriot-Watt University, and she is affiliated with the National Robotarium. She received her bachelor's and master's degrees in engineering of computing systems from Politecnico di Milano (Italy). She earned her PhD from the University of Manchester on human-robot interaction and deep learning for companionship in elderly care, working on the H2020 project MoveCare (Multiple-Actors Virtual Empathic Caregiver for the Elder). She then stayed at the University of Manchester as a postdoctoral researcher with the UKRI Node on Trust, investigating how trust in human-robot interactions is built, maintained and recovered when lost. Her research focuses on developing socially intelligent robots that adapt to their users to increase acceptability and usability. She has worked on multiparty human-robot interaction within the European project SPRING, and she is PI of the TAS-GAIL project (Go Ahead I am Listening), which aims to develop a robotic active listener. Her interests include human-robot interaction, social robotics, failures and repairs in interactions between humans and robots, and healthcare technologies.


References

  • Stolarz, M., Mitrevski, A., Wasil, M. and Plöger, P.G., 2024. Learning-based personalisation of robot behaviour for robot-assisted therapy. Frontiers in Robotics and AI, 11, 1352152. doi: 10.3389/frobt.2024.1352152.
  • Alameda-Pineda, X., Addlesee, A., García, D.H., Reinke, C., Arias, S., Arrigoni, F., Auternaud, A., Blavette, L., Beyan, C., Camara, L.G. and Cohen, O., 2024. Socially pertinent robots in gerontological healthcare. arXiv preprint arXiv:2404.07560.
  • Addlesee, A., Cherakara, N., Nelson, N., Hernández García, D., Gunson, N., Sieińska, W., Romeo, M., Dondrup, C. and Lemon, O., 2024. A multi-party conversational social robot using LLMs. In Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (pp. 1273-1275).
  • Aylett, M.P. and Romeo, M., 2023. You don't need to speak, you need to listen: Robot interaction and human-like turn-taking. In Proceedings of the 5th International Conference on Conversational User Interfaces (pp. 1-5).
  • Drijvers, L. and Holler, J., 2023. The multimodal facilitation effect in human communication. Psychonomic Bulletin & Review, 30(2), pp. 792-801.
  • Mazzola, C., Romeo, M., Rea, F., Sciutti, A. and Cangelosi, A., 2023. To whom are you talking? A deep learning model to endow social robots with addressee estimation skills. arXiv preprint arXiv:2308.10757.
  • Romeo, M., Hernández García, D., Han, T., Cangelosi, A. and Jokinen, K., 2021. Predicting apparent personality from body language: Benchmarking deep learning architectures for adaptive social human-robot interaction. Advanced Robotics, 35(19), pp. 1167-1179.
  • Romeo, M., Hernández García, D., Jones, R. and Cangelosi, A., 2019. Deploying a deep learning agent for HRI with potential "end-users" at multiple sheltered housing sites. In Proceedings of the 7th International Conference on Human-Agent Interaction (pp. 81-88).

Contact


Teena Chakkalayil Hassan

Professor

Location

Sankt Augustin

Room

C 216

Address

Grantham-Allee 20

53757 Sankt Augustin

Telephone

+49 2241 865 9608