
All Publications

📅 2026 ICRA 2026

A Modular, Wireless and Wearable Biosignal Acquisition Platform

Authors: Mohamad Reza Shahabian Alashti, Co-authors TBD
Published In ICRA 2026
Year 2026
Abstract
We present a modular, wireless biosignal acquisition platform designed to enable scalable electromyography (EMG) and inertial measurement unit (IMU) sensing for wearable robotics applications. The system supports up to 64 EMG channels and integrates a 9-axis IMU, leveraging a distributed Leader-Follower board architecture. In this work, we demonstrate synchronised acquisition of 32 EMG channels together with IMU motion data in a fully wireless setup. The embedded firmware ensures low-latency, high-fidelity streaming at 1.4 kHz over a 2.4-GHz industrial, scientific and medical (ISM) band link. Benchmarking shows that the platform maintains uniformly strong performance across noise, power, footprint, bandwidth, and scalability, in contrast to existing designs that optimize only a single metric. Experimental demonstrations confirm reliable acquisition of high-density EMG and IMU signals across functional activities, highlighting the device’s robustness and wearability. The proposed system provides a compact and flexible solution for intent-aware wearable technologies, with applications in assistive exosuits, rehabilitation, and human–robot interaction.

Paper under review for the International Conference on Robotics and Automation (ICRA) 2026.

This work presents a comprehensive platform for wireless biosignal acquisition, designed to support research in wearable robotics and human-machine interaction.
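
To make the streaming numbers concrete, here is a minimal Python sketch of frame parsing on the receiving side. The frame layout is a hypothetical assumption for illustration (the paper does not publish its wire format): a 16-bit sequence counter, 32 int16 EMG samples, and nine float32 IMU values give 102 bytes per frame, which at 1.4 kHz is roughly 1.1 Mbit/s, comfortably within a 2.4-GHz ISM link budget.

```python
import struct

# Hypothetical frame layout, for illustration only: a 16-bit sequence
# counter, 32 little-endian int16 EMG samples, and a 9-axis IMU reading
# (3x accel, 3x gyro, 3x mag) as float32.
FRAME_FMT = "<H32h9f"
FRAME_SIZE = struct.calcsize(FRAME_FMT)  # 2 + 64 + 36 = 102 bytes

def parse_frame(buf: bytes):
    """Unpack one radio frame into (seq, emg[32], imu[9])."""
    fields = struct.unpack(FRAME_FMT, buf[:FRAME_SIZE])
    return fields[0], fields[1:33], fields[33:]

def frames_dropped(prev_seq: int, seq: int) -> int:
    """Sequence counters make wireless loss observable: any gap
    larger than one (mod 2^16) means frames went missing."""
    return (seq - prev_seq - 1) % 0x10000
```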

📅 2025 ICSR 2025

Towards Memory-Driven Agentic AI for Human Activity Recognition

Authors: Mohamad Reza Shahabian Alashti, Khashayar Ghamati, Hooman Samani, Abolfazl Zaraki
Published In ICSR 2025
Year 2025
Abstract
This paper proposes a novel, scalable agentic AI architecture designed to enhance human activity recognition across data modalities by embedding memory-driven reasoning and context awareness. The architecture integrates multimodal sensing, deliberative reasoning through supervised learning and context-aware language models, and memory mechanisms, including short-term memory for tracking immediate activity transitions and long-term memory for embedding experiential knowledge. The evaluation of the proposed model using two major datasets, namely RHM (6.7K video clips of 14 known activities) and Toyota Smart Home (16K video clips of 31 unknown activities), demonstrates significant improvements, achieving 60% accuracy when combining contextual information with supervised model output, compared to 40% accuracy with context alone and 35% with supervised models on unseen data. By overcoming the limitations of traditional HAR approaches, this research advances the development of responsive and intelligent robotic systems, facilitating more natural and effective human-robot collaboration.

Official page: University of Hertfordshire Research Profiles

Talk video: YouTube

Accepted for publication at the International Conference on Social Robotics (ICSR) 2025.

This paper introduces novel concepts from agentic AI to human activity recognition, proposing memory-driven systems that can adapt their behavior based on context and user history.
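
A minimal sketch of the memory-driven fusion idea follows, with illustrative weights and interfaces; the paper's actual architecture uses context-aware language models and learned components rather than this hand-tuned blend.

```python
from collections import Counter, deque

class MemoryDrivenHAR:
    """Fuse supervised predictions with contextual scores, biased by a
    short-term memory of recent activities; long-term memory accumulates
    experiential frequencies. All weights here are illustrative assumptions."""

    def __init__(self, window: int = 5):
        self.short_term = deque(maxlen=window)  # recent activity labels
        self.long_term = Counter()              # experiential knowledge

    def fuse(self, supervised: dict, contextual: dict, alpha: float = 0.5) -> str:
        # Blend the two probability maps over the union of activities ...
        scores = {a: alpha * supervised.get(a, 0.0)
                     + (1 - alpha) * contextual.get(a, 0.0)
                  for a in set(supervised) | set(contextual)}
        # ... and nudge activities seen recently (a crude transition prior).
        for past in self.short_term:
            scores[past] = scores.get(past, 0.0) + 0.05
        best = max(scores, key=scores.get)
        self.short_term.append(best)
        self.long_term[best] += 1
        return best

har = MemoryDrivenHAR()
print(har.fuse({"cooking": 0.6, "cleaning": 0.4}, {"cooking": 0.3, "eating": 0.7}))
```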

📅 2024 BioRob 2024

Efficient Skeleton-based Human Activity Recognition in Ambient Assisted Living Scenarios with Multi-view CNN

Authors: Mohamad Reza Shahabian Alashti, Mohammad Bamorovat Abadi, Patrick Holthaus, Catherine Menon, Farshid Amirabdollahian
Published In BioRob 2024
Year 2024
Abstract
Human activity recognition (HAR) plays a critical role in diverse applications and domains, from assessments of ambient assistive living (AAL) settings and the development of smart environments to human-robot interaction (HRI) scenarios. However, using mobile robot cameras in such contexts has limitations like restricted field of view and possible noise. Therefore, employing additional fixed cameras can enhance the field of view and reduce susceptibility to noise. Nevertheless, integrating additional camera perspectives increases complexity, a concern exacerbated by the number of real-time processes that robots should perform in the AAL scenario. This paper introduces our methodology that facilitates the combination of multiple views and compares different aspects of fusing information at low, medium and high levels. Their comparison is guided by parameters such as the number of training parameters, floating-point operations per second (FLOPs), training time, and accuracy. Our findings uncover a paradigm shift, challenging conventional beliefs by demonstrating that simplistic CNN models outperform their more complex counterparts using this innovation. Additionally, the pivotal role of pipeline and data combination emerges as a crucial factor in achieving better accuracy levels. In this study, integrating the additional view with the Robot-view resulted in an accuracy increase of up to 25%. Ultimately, we have successfully attained a streamlined and efficient multi-view HAR pipeline, which will now be incorporated into AAL interaction scenarios.

Official page: IEEE Xplore

Talk video: YouTube

Published at the 10th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob).

This work demonstrates how multi-view skeleton data can be effectively processed using convolutional neural networks for real-time activity recognition in smart home environments.
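
The low-versus-high-level fusion comparison can be illustrated with a small sketch. This is not the paper's exact architecture; channel counts and layer sizes are assumptions, and the snippet only shows where the views are combined.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Deliberately small backbone, echoing the paper's finding that
    simple CNNs perform well once views are combined sensibly."""
    def __init__(self, in_ch: int = 3, n_classes: int = 14):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):
        return self.net(x)

# Low-level (early) fusion: stack robot and static view inputs along
# the channel axis and train a single backbone.
early = TinyCNN(in_ch=2 * 3)

# High-level (late) fusion: one backbone per view, average the logits.
robot_net, static_net = TinyCNN(), TinyCNN()

def late_fusion(robot_img: torch.Tensor, static_img: torch.Tensor):
    return (robot_net(robot_img) + static_net(static_img)) / 2
```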

📅 2024 BioRob 2024

Robotic Vision and Multi-View Synergy: Action and Activity Recognition in Assisted Living Scenarios

Authors: Mohammad Bamorovat Abadi, Mohamad Reza Shahabian Alashti, Patrick Holthaus, Catherine Menon, Farshid Amirabdollahian
Published In BioRob 2024
Year 2024
Abstract
The significance of Human-Robot Interaction (HRI) is increasingly evident when integrating robotics within human-centric settings. A crucial component of effective HRI is Human Activity Recognition (HAR), which is instrumental in enabling robots to respond aptly in human presence, especially within Ambient Assisted Living (AAL) environments. Since robots are generally mobile and their visual perception is often compromised by motion and noise, this paper evaluates methods by merging the robot's mobile perspective with a static viewpoint utilising multi-view deep learning models. We introduce a dual-stream Convolutional 3D (C3D) model to improve vision-based HAR accuracy for robotic applications. Utilising the Robot House Multiview (RHM) dataset, which encompasses a robotic perspective along with three static views (Front, Back, Top), we examine the efficacy of our model and conduct comparisons with the dual-stream ConvNet and Slow-Fast models. The primary objective of this study is to enhance the accuracy of robot viewpoints by integrating them with static views using dual-stream models. The metrics for evaluation include Top-1 and Top-5 accuracy. Our findings reveal that the integration of static views with robotic perspectives significantly boosts HAR accuracy in both Top-1 and Top-5 metrics across all models tested. Moreover, the proposed dual-stream C3D model demonstrates superior performance compared to the other contemporary models in our evaluations.

Official page: IEEE Xplore

Published at the 10th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob).

This paper explores how multiple camera viewpoints can be integrated with robotic perception systems to achieve more reliable activity recognition for elderly care applications.
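
A rough sketch of the dual-stream idea: one 3D-convolutional stream per viewpoint, with the features merged before classification. The streams here are heavily cut down and the concatenation fusion is an assumption; the published C3D streams are much deeper.

```python
import torch
import torch.nn as nn

class TinyC3D(nn.Module):
    """Cut-down 3D-conv stream; stands in for a full C3D backbone."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, feat_dim))

    def forward(self, clip):  # clip: (batch, 3, frames, height, width)
        return self.net(clip)

class DualStreamC3D(nn.Module):
    """One stream for the robot view, one for a static view; features
    are concatenated and classified jointly over the 14 RHM classes."""
    def __init__(self, n_classes: int = 14):
        super().__init__()
        self.robot, self.static = TinyC3D(), TinyC3D()
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, robot_clip, static_clip):
        feats = torch.cat([self.robot(robot_clip), self.static(static_clip)], dim=1)
        return self.head(feats)
```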

📅 2023 ACHI 2023

Lightweight Human Activity Recognition for Ambient Assisted Living

Authors: Mohamad Reza Shahabian Alashti, Mohammad Bamorovat Abadi, Patrick Holthaus, Catherine Menon, Farshid Amirabdollahian
Published In ACHI 2023
Year 2023
Abstract
Ambient assisted living (AAL) systems aim to improve the safety, comfort, and quality of life for the populations with specific attention given to prolonging personal independence during later stages of life. Human activity recognition (HAR) plays a crucial role in enabling AAL systems to recognise and understand human actions. Multi-view human activity recognition (MV-HAR) techniques are particularly useful for AAL systems as they can use information from multiple sensors to capture different perspectives of human activities and can help to improve the robustness and accuracy of activity recognition. In this work, we propose a lightweight activity recognition pipeline that utilizes skeleton data from multiple perspectives to combine the advantages of both approaches and thereby enhance an assistive robot's perception of human activity. The pipeline includes data sampling, input data type, and representation and classification methods. Our method modifies a classic LeNet classification model (M-LeNet) and uses a Vision Transformer (ViT) for the classification task. Experimental evaluation on a multi-perspective dataset of human activities in the home (RHM-HAR-SK) compares the performance of these two models and indicates that combining camera views can improve recognition accuracy. Furthermore, our pipeline provides a more efficient and scalable solution in the AAL context, where bandwidth and computing resources are often limited.

Official page: University of Hertfordshire Research Profiles

This work was published at the Sixteenth International Conference on Advances in Computer-Human Interactions (ACHI) and focuses on model compression and optimization techniques that enable real-time HAR on the resource-constrained devices commonly found in smart home environments. A minimal model sketch follows the list of key contributions below.

Key contributions include:

  • Efficient network architectures for edge deployment
  • Knowledge distillation for model compression
  • Real-time performance on embedded devices
  • Maintained accuracy with reduced computational requirements
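
In the spirit of the paper's M-LeNet, a LeNet-style classifier over skeleton sequences rendered as single-channel joint-by-frame images might look as follows. Exact layer sizes and the input rendering are assumptions, and the ViT branch is omitted.

```python
import torch.nn as nn

class MLeNet(nn.Module):
    """LeNet-style classifier over skeleton 'images' (joints x frames).
    Small enough for edge deployment; layer sizes are illustrative."""
    def __init__(self, n_classes: int = 14):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, n_classes))

    def forward(self, x):  # x: (batch, 1, joints, frames)
        return self.classifier(self.features(x))
```
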
📅 2023 ACHI 2023

RHM: Robot House Multi-view Human Activity Recognition Dataset

Authors: Mohammad Bamorovat Abadi, Mohamad Reza Shahabian Alashti, Patrick Holthaus, Catherine Menon, Farshid Amirabdollahian
Published In ACHI 2023
Year 2023
Abstract
With the recent increased development of deep neural networks and dataset capabilities, the Human Action Recognition (HAR) domain is growing rapidly in terms of both the available datasets and deep models. Despite this, there is a lack of datasets specifically covering the robotics field and human-robot interaction. We prepare and introduce a new multi-view dataset to address this. The Robot House Multi-View dataset (RHM) contains four views: Front, Back, Ceiling, and Robot Views. There are 14 classes with 6701 video clips for each view, making a total of 26804 video clips for the four views. The lengths of the video clips are between 1 and 5 seconds. Videos with the same clip number and class are synchronised across the different views. In the second part of this paper, we consider how single streams afford activity recognition using established state-of-the-art models. We then assess the affordance of each view based on information-theoretic modelling and the concept of mutual information. Furthermore, we benchmark the performance of the different views, thus establishing the strengths and weaknesses of each view relative to their information content and benchmark performance. Our results lead us to conclude that multi-view and multi-stream activity recognition has the added potential to improve activity recognition results.

Official page: University of Hertfordshire Research Profiles

Dataset page: Robot House Multiview Human Activity Recognition Dataset

A comprehensive RGB video dataset including:

  • Multiple camera viewpoints covering entire living spaces
  • Natural activities performed in realistic home settings
  • Long-duration recordings capturing activity variations
  • Comprehensive annotations and metadata
  • Suitable for various computer vision tasks

The Robot House provides a unique realistic testbed for ambient assisted living research, and this dataset captures genuine human behaviors in that environment.
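
The view-affordance analysis mentioned in the abstract can be sketched as follows: estimate the mutual information between per-view clip features and activity labels, then rank views by it. Feature extraction is assumed done elsewhere, and scikit-learn's estimator stands in for whatever estimator the paper used; random arrays are placeholders for real features.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
labels = rng.integers(0, 14, size=500)       # 14 RHM activity classes
views = {name: rng.normal(size=(500, 32))    # 32-dim clip features (stand-in)
         for name in ("Front", "Back", "Ceiling", "Robot")}

# Higher mean mutual information suggests a view carries more
# label-relevant information, mirroring the paper's ranking idea.
for name, feats in views.items():
    mi = mutual_info_classif(feats, labels, random_state=0).mean()
    print(f"{name:8s} mean MI with labels: {mi:.4f}")
```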

📅 2023 ACHI 2023

RHM-HAR-SK: A multi-view dataset with skeleton data for Ambient Assisted Living Research

Authors: Mohamad Reza Shahabian Alashti, Mohammad Bamorovat Abadi, Patrick Holthaus, Catherine Menon, Farshid Amirabdollahian
Published In ACHI 2023
Year 2023
Abstract
Ambient assisted living (AAL) systems aim to improve the safety, comfort, and quality of life for the populations with specific attention given to prolonging personal independence during later stages of life. Human activity recognition (HAR) plays a crucial role in enabling AAL systems to recognise and understand human actions. Multi-view human activity recognition (MV-HAR) techniques are particularly useful for AAL systems as they can use information from multiple sensors to capture different perspectives of human activities and can help to improve the robustness and accuracy of activity recognition. In this work, we propose a lightweight activity recognition pipeline that utilizes skeleton data from multiple perspectives to combine the advantages of both approaches and thereby enhance an assistive robot's perception of human activity. The pipeline includes data sampling, input data type, and representation and classification methods. Our method modifies a classic LeNet classification model (M-LeNet) and uses a Vision Transformer (ViT) for the classification task. Experimental evaluation on a multi-perspective dataset of human activities in the home (RHM-HAR-SK) compares the performance of these two models and indicates that combining camera views can improve recognition accuracy. Furthermore, our pipeline provides a more efficient and scalable solution in the AAL context, where bandwidth and computing resources are often limited.

Official page: University of Hertfordshire Research Profiles

A dataset page is available here: Robot House RHM-HAR-SK

A publicly available dataset featuring:

  • Multi-view synchronized skeleton sequences
  • Diverse activities relevant to elderly care
  • Multiple subjects with varied demographics
  • High-quality 3D pose annotations
  • Benchmark evaluation protocols

The dataset has been used by multiple research groups for developing and evaluating HAR algorithms in assisted living contexts.
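
As a small usage sketch, multi-view skeleton sequences are typically normalised per frame before classification. The array layout and hip-joint index below are assumptions for illustration, not the dataset's documented format.

```python
import numpy as np

def normalise_skeleton(seq: np.ndarray, hip_idx: int = 0) -> np.ndarray:
    """Centre each frame's joints on the hip and scale to unit size.
    `seq` has shape (frames, joints, 3)."""
    centred = seq - seq[:, hip_idx:hip_idx + 1, :]
    scale = np.linalg.norm(centred, axis=(1, 2), keepdims=True) + 1e-8
    return centred / scale

# Views can then be stacked per clip for a multi-view classifier, e.g.:
# clip = np.stack([normalise_skeleton(v) for v in (front, back, robot)])
```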

📅 2022 Alan Turing Institute Report

Data augmentation and synthetic data generation for low-frequency and sparse data problems

Authors: Mohamad Reza Shahabian Alashti, Alan Turing Institute Team, AMRC Collaborators
Published In Alan Turing Institute Report
Year 2022
Abstract
The Advanced Manufacturing Research Centre (AMRC) group is part of the UK’s High Value Manufacturing (HVM) Catapult, whose mission is to accelerate the concepts-to-commercial-reality process and create a sustainable future for high-value manufacturing. Many manufacturers rely on the manufacture of a small number of high-value workpieces. In contrast to high-volume production, any workpieces rejected in high-value manufacturing represent a large individual investment of resources. Nonetheless, the reasons for rejection are often difficult to determine, which hinders the improvement of the manufacturing process. The reason for this difficulty is the lack of data available on the manufactured workpiece - due to it undergoing several processes, the scarcity of data collected, and the limited sample size in low-volume, high-value manufacturing. Early prediction of workpiece failure would increase productivity and reduce waste by an early stop of the manufacturing process. The workpieces of interest for this Data Study Group are high-value, low-yield ones since they are either made from expensive materials or undergo expensive treatments. Thus, any improvements that lead to fewer rejected pieces will be of high business value and save resources. AMRC has recently collected data on 16 such products, from their forging through machining to their final quality check. This challenge aims to explore the potential of this novel dataset for high-value, low-yield research with a particular emphasis on failure prediction, data augmentation, and data viability.

Technical report from the Data Study Group collaboration with the Alan Turing Institute and Advanced Manufacturing Research Centre.

Key contributions:

  • Novel augmentation strategies for sparse datasets
  • Synthetic data generation preserving statistical properties
  • Validation approaches for augmented data
  • Application to real-world manufacturing challenges
  • Guidelines for practitioners working with limited data
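
As a generic illustration of the augmentation theme (not the report's specific method), small sensor or tabular datasets are often expanded by jittering each sample with noise scaled to per-feature statistics, as sketched below.

```python
import numpy as np

def jitter_augment(samples: np.ndarray, n_copies: int = 10,
                   noise_scale: float = 0.05, seed: int = 0) -> np.ndarray:
    """Replicate each sample with Gaussian noise proportional to each
    feature's standard deviation, roughly preserving per-feature statistics."""
    rng = np.random.default_rng(seed)
    std = samples.std(axis=0, keepdims=True)
    copies = [samples + rng.normal(0.0, noise_scale, samples.shape) * std
              for _ in range(n_copies)]
    return np.concatenate([samples, *copies], axis=0)

# e.g. 16 workpieces x 40 process features -> 176 training rows
augmented = jitter_augment(np.random.default_rng(1).normal(size=(16, 40)))
print(augmented.shape)  # (176, 40)
```
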
📅 2021 4th UKRAS21 Conference: Robotics at home Proceedings

Human activity recognition in RoboCup@home: Inspiration from online benchmarks

Authors: Mohamad Reza Shahabian Alashti, Mohammad Bamorovat Abadi, Patrick Holthaus, Catherine Menon, Farshid Amirabdollahian
Published In 4th UKRAS21 Conference: Robotics at home Proceedings
Year 2021
Abstract
Human activity recognition is an important aspect of many robotics applications. In this paper, we discuss how well the RoboCup@home competition accounts for the importance of such recognition algorithms. Using public benchmarks as an inspiration, we propose to add a new task that specifically tests the performance of human activity recognition in this league. We suggest that human-robot interaction research in general can benefit from the addition of such a task as RoboCup@home is considered to accelerate, regulate, and consolidate the field.

Official page: University of Hertfordshire Research Profiles

This work proposes standardized benchmark tasks and evaluation protocols for HAR in home environments, drawing inspiration from public online benchmarks to enable fair comparison across approaches and more reproducible research.

📅 2021 4th UKRAS21 Conference: Robotics at home Proceedings

Affordable Robot Mapping using Omnidirectional Vision

Authors: Mohammad Bamorovat Abadi, Mohamad Reza Shahabian Alashti, Patrick Holthaus, Catherine Menon, Farshid Amirabdollahian
Published In 4th UKRAS21 Conference: Robotics at home Proceedings
Year 2021
Abstract
Mapping is a fundamental requirement for robot navigation. In this paper, we introduce a novel visual mapping method that relies solely on a single omnidirectional camera. We present a metric that allows us to generate a map from the input image by using a visual sonar approach. The combination of the visual sonars with the robot's odometry enables us to determine a relation equation and subsequently generate a map that is suitable for robot navigation. Results based on visual map comparison indicate that our approach is comparable with established solutions based on RGB-D cameras or laser-based sensors.

Official page: UH Research Archive (UHRA)

This work demonstrates how affordable omnidirectional cameras can be used for simultaneous localization and mapping (SLAM), reducing the cost barrier for robotics research and education.
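
The core geometric step can be sketched briefly: per-bearing free-space ranges (the "visual sonar") are projected from the robot frame into the map frame using odometry. The pixel-to-metre calibration the paper derives is assumed to have been applied already.

```python
import numpy as np

def sonar_to_points(ranges: np.ndarray, pose: tuple) -> np.ndarray:
    """Project per-bearing free-space ranges into map coordinates.
    `pose` is the odometry estimate (x, y, heading) in the map frame."""
    x, y, theta = pose
    bearings = np.linspace(0.0, 2.0 * np.pi, len(ranges), endpoint=False)
    px = x + ranges * np.cos(theta + bearings)
    py = y + ranges * np.sin(theta + bearings)
    return np.stack([px, py], axis=1)  # obstacle points to accumulate

# Accumulating these points along a trajectory yields a map usable for
# navigation, which the paper compares against RGB-D and laser maps.
```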

📅 2018 6th RSI International Conference on Robotics and Mechatronics (IcRoM)

Automatic ROI Detection in Lumbar Spine MRI

Authors: Mohamad Reza Shahabian Alashti, Mohammad Reza Daliri, Behnam Jamei
Published In 6th RSI International Conference on Robotics and Mechatronics (IcRoM)
Year 2018
Abstract
Low back pain (LBP) is one of the most common diseases affecting a large number of people. Diagnosis and treatment of LBP require quick, accurate imaging methods. Magnetic resonance imaging (MRI) is effective in distinguishing between vertebra, intervertebral disc and spinal cord, and thus is used frequently in spinal cord injury (SCI) diagnosis. This paper proposes a fully automated approach to detecting regions of interest (ROIs) in T2-weighted MRI images. Our dataset included the cases of 100 patients who suffered from LBP. In total, 2000 axial and 1200 sagittal ROIs were marked in the lumbar spine. Extracted ROIs were used to train the cascade classifier. In this method, ROI detection consists of two processes: first, the ROIs are specified using the cascade classifier, and then a filtering process discards non-regions of interest (NROIs). The Histogram of Oriented Gradients (HOG) was used as the feature descriptor in each stage of the cascade classifier. The method does not require background knowledge of the input images and is reliable regardless of image size, contrast, and clinical abnormality. The quantitative and qualitative evaluation results of the proposed ROI detector were 83% and above 94%, respectively.

Official page: IEEE Xplore

This work, from the author's MSc research, applies machine learning and image-processing techniques to automatically identify relevant anatomical structures in medical imaging, assisting radiologists in diagnosis and treatment planning.
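
The detect-then-filter pattern from the abstract, sketched with OpenCV's cascade detector. The cascade filename is hypothetical, the paper's cascade used HOG features (supported by the OpenCV cascade tooling of that era), and the area filter below is a simple stand-in for the paper's actual NROI criteria.

```python
import cv2

# Hypothetical trained-model filename, for illustration only.
cascade = cv2.CascadeClassifier("lumbar_cascade.xml")

def detect_rois(mri_slice, min_area: int = 400):
    """Run the cascade on a grayscale T2-weighted slice, then discard
    candidates with a crude area filter (standing in for NROI removal)."""
    candidates = cascade.detectMultiScale(mri_slice, scaleFactor=1.1,
                                          minNeighbors=5)
    return [(x, y, w, h) for (x, y, w, h) in candidates if w * h >= min_area]
```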

📅 2017 5th RSI International Conference on Robotics and Mechatronics (ICRoM)

FARAT1: An Upper Body Exoskeleton Robot

Authors: Farzad Cheraghpour, Farbod Farzad, Milad Shahbabai, Mohamad Reza Shahabian Alashti
Published In 5th RSI International Conference on Robotics and Mechatronics (ICRoM)
Year 2017
Abstract
Exoskeleton robots are designed to increase the strength and endurance of human limbs. Robots of this kind can be used to increase the physical ability of either disabled or ordinary people in executing motion or manipulation tasks. The important point is to design a form that can be used safely and accurately. Such a device can assist in walking, running, jumping, or lifting objects that are beyond human abilities to carry. In this paper, an upper-body exoskeleton robot for rehabilitation applications, called FARAT1, is presented. This exoskeleton can be used for physiotherapy of a patient's whole arm: the physiotherapist wears the MYO armband device and performs predefined actions. The design process of the main parts, including biomechanical modeling, conceptual design aspects, loading analysis, and stress analysis of the hand, is presented. Manufacturing details, including 3D printing of the main parts, are explained, and the final prototype of the robot, together with its control instruments and a mobile application designed for control, is shown.

Official page: IEEE Xplore

Part of the work at SYNTECH Technology & Innovation Center, this project developed a wearable exoskeleton for upper body assistance and rehabilitation, incorporating advanced mechatronics and control systems.
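
The physiotherapy use case boils down to teach-and-repeat: record the therapist's MYO-sensed motion, then replay it on the exoskeleton. A minimal sketch follows, with the orientation-to-joint mapping left as a stand-in for the robot's real kinematics; all interfaces here are assumptions.

```python
import numpy as np

class TeachAndRepeat:
    """Record a demonstrated arm motion, then replay it as joint targets.
    Interfaces are illustrative; the MYO streams orientation and EMG."""

    def __init__(self):
        self.trajectory = []

    def record(self, roll: float, pitch: float, yaw: float):
        # Store one orientation sample from the therapist's armband.
        self.trajectory.append(np.array([roll, pitch, yaw]))

    def replay(self, send_joint_command):
        # Stream the recorded targets to the exoskeleton's motor drivers.
        for target in self.trajectory:
            send_joint_command(target)
```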

📅 2017 Artificial Intelligence and Robotics (IRANOPEN)

Mechanical Basic and Detailed Design for the Redundant Arm SAAM applied on a Domestic Service Robot

Authors: Farzad Cheraghpour Samavati, Majid Iranikhah, Parastoo Dastangoo, Mohamad Reza Shahabian Alashti
Published In Artificial Intelligence and Robotics (IRANOPEN)
Year 2017
Abstract
In this paper, we describe the design and manufacturing process of the mechanical robotic arm SAAM (seven-axis anthropomorphic manipulator). The main goal is to design an arm suitable for use as a service-robot arm in the home environment. The design is described in five steps: loading analysis, stress analysis, material selection, failure-theory consideration, and safety-factor calculation. All calculations are carried out for each part of the arm. Finally, the manufacturing process of the arm is explained and a real sample of the arm is manufactured.

Official page: IEEE Xplore

Comprehensive design work for the SAAM (7-DoF) robotic arm used in domestic service robots, covering:

  • Kinematic analysis and workspace optimization
  • Mechanical component selection and design
  • Manufacturing and assembly procedures
  • Integration with mobile robot platforms
  • Control system architecture

This arm was successfully deployed on service robots competing in the RoboCup@Home league.
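
The final design step, safety-factor calculation, is simple enough to show as a worked example; the material and stress values below are illustrative, not the arm's actual loads.

```python
def safety_factor(yield_strength_mpa: float, max_stress_mpa: float) -> float:
    """n = yield strength / maximum working stress; n > 1 means the part
    survives the analysed load case with margin."""
    return yield_strength_mpa / max_stress_mpa

# e.g. 6061-T6 aluminium (yield ~276 MPa) under a 92 MPa peak stress:
print(safety_factor(276.0, 92.0))  # -> 3.0
```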