Shuai Ma | 马帅


Research

Personalization/User Modeling and Interactive Machine/Deep Learning

Every user differs from others, and in many subjective tasks these individual differences and preferences are significant. To help users with a task, a general model is usually built with ML or DL methods. However, such models are trained on data from many kinds of users, so they may not suit any specific user. Modeling users' personalities or preferences therefore matters, and how to adapt a general model to a specific user is an interesting research question.
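As an illustrative sketch of this idea (not the method of any system below), adaptation can be as simple as warm-starting from a general model and continuing training on a small amount of the target user's data. Everything here is hypothetical: the synthetic data, the plain logistic-regression learner, and the two decision boundaries.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w=None, epochs=300, lr=0.5):
    """Logistic regression by full-batch gradient descent; warm-start via `w`."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)     # gradient of the log loss
    return w

def accuracy(X, y, w):
    return float(((X @ w > 0).astype(int) == y).mean())

# Pooled data from many users (synthetic stand-in): label depends on x0 + x1.
X_all = rng.normal(size=(500, 4))
y_all = (X_all[:, 0] + X_all[:, 1] > 0).astype(float)

# One specific user whose preference differs: label depends on x0 - x1.
X_user = rng.normal(size=(40, 4))
y_user = (X_user[:, 0] - X_user[:, 1] > 0).astype(float)

w_general = train(X_all, y_all)                        # general model
general_acc = accuracy(X_user, y_user, w_general)

w_adapted = train(X_user, y_user, w=w_general.copy())  # per-user adaptation
adapted_acc = accuracy(X_user, y_user, w_adapted)

print(f"general: {general_acc:.2f}  adapted: {adapted_acc:.2f}")
```

Because the user's boundary disagrees with the crowd's, the general model performs near chance on this user, while a few passes over the user's own data recover a suitable personalized model.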
SmartEye: Assisting Instant Photo Taking via Integrating User Preference with Deep View Proposal Network

Shuai Ma, Zijun Wei, Feng Tian, Xiangmin Fan, Jianming Zhang, Xiaohui Shen, Zhe Lin, Jin Huang, Radomír Měch, Dimitris Samaras, Hongan Wang (CHI 2019) [PDF]

Honorable Mention Award

Taking a high-quality photo needs composition skill which non-expert users lack. We present SmartEye, a novel mobile system to help users take photos with good compositions in-situ. SmartEye integrates the View Proposal Network (VPN), a deep learning-based model that outputs composition suggestions in real-time, and a novel, interactively updated module (P-Module) that adjusts the VPN outputs to account for personalized composition preferences.


User Adaptive Modeling in 2D Moving Target Selection

[To appear]

Our previous work proposed a method to assist 2D moving target selection. However, it relied on a general model trained on data collected from all users, which may not suit a new user. We therefore designed a personalized user modeling method that adapts the general model to each specific user.


Human-computer interaction in Healthcare

Many diseases are difficult to detect in their early stages, but some symptoms can be sensitively captured by sensors. We therefore developed human-computer interaction methods that capture users' limb movements through mobile phones, Kinect, and other devices, and collected large amounts of data from both healthy users and patients to enable assisted diagnosis of diseases.
Implicit Detection of Motor Impairment in Parkinson’s Disease from Everyday Smartphone Interactions

Jing Gao, Feng Tian, Junjun Fan, Dakuo Wang, Xiangmin Fan, Yicheng Zhu, Shuai Ma, Jin Huang, Hongan Wang (CHI 2018 Poster) [PDF]

In this work, we explored the feasibility and accuracy of detecting motor impairment in Parkinson’s disease (PD) by implicitly sensing and analyzing users’ everyday interactions with their smartphones. In a study with 42 subjects, our approach achieved an overall accuracy of 88.1% (90.0% sensitivity / 86.4% specificity) in discriminating PD subjects from age-matched healthy controls.
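For reference, sensitivity is the true-positive rate among PD subjects and specificity the true-negative rate among healthy controls. A minimal sketch of how such metrics derive from a confusion matrix; the counts below are hypothetical, chosen only to be consistent with the reported figures, not the study's actual data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts for 42 subjects (NOT the study's data).
tp, fn = 18, 2   # PD subjects classified correctly / incorrectly
tn, fp = 19, 3   # healthy controls classified correctly / incorrectly

sens, spec = sensitivity_specificity(tp, fn, tn, fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={accuracy:.1%}")
```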


Identifying Gait Abnormality with a Single Click

Jin Huang, Shuai Ma, Feng Tian, Xiang Li, Jie Liu, Hongan Wang (SCIENCE CHINA) [To appear]

Gait abnormality is one of the major symptoms of nervous system diseases such as Parkinson’s disease. In the clinic, assessment tools usually require patients to complete a long and tedious testing process under the supervision of a doctor, which places tremendous pressure on both patients and hospitals. We propose a novel system that integrates an identity recognition algorithm, a behavior recognition algorithm, and a built-in gait detection model to accelerate the clinical diagnosis process.


Human-AI Interaction in Healthcare: Three Case Studies About How Patient(s) And Doctors Interact with AI in a Multi-Tiers Healthcare Network

Yunzhi Li, Liuping Wang, Shuai Ma, Xiangmin Fan, Zijun Wang, Junfeng Jiao, Dakuo Wang (CHI 2019 Workshop) [PDF]

We present three ongoing research projects that aim to study how to design, develop, and evaluate systems supporting human-AI interaction in the healthcare domain. By collaborating with local government administrators, hospitals, clinics, and doctors, we gained a valuable opportunity to study and improve how AI-empowered technologies are changing people's lives as they provide or receive healthcare services in a suburban district of Beijing, China. We hope this work will ground discussions with other participants in the workshop and build further collaborations with the health informatics community.


Video Interaction

To better utilize video resources, we developed interactive methods for video viewing, video editing, MOOC learning, and more.
Co-Lighter: Promoting Video Watching by Crowd Suggestion on Specific Content

[To appear]

In this research, we first conducted a preliminary survey among 114 participants to investigate the limitations of current video-watching modes. Then, based on the survey results, we present Co-Lighter, a novel tool for video viewing and commenting. Co-Lighter allows viewers to watch videos and share their feelings about video content collaboratively.


Interaction Technique for Daily Life

Interaction is everywhere in our daily life. What can it do to create a better life?
mirrorU: Scaffolding Emotional Reflection via In-Situ Assessment and Interactive Feedback

Liuping Wang, Xiangmin Fan, Feng Tian, Lingjia Deng, Shuai Ma, Jin Huang, Hongan Wang (CHI 2018 Poster) [PDF]

We present mirrorU, a mobile system that helps users reflect on and write about their daily emotional experiences. While prior work has focused primarily on providing memory triggers or affective cues, mirrorU provides in-situ assessment and interactive feedback to scaffold reflective writing.


Interaction Technique for Touch and Design

We have proposed several systems for design and gesture recognition.
Upcycle-Chic: A Software Tool for Ideating Furniture Upcycling Design

[To appear]

We present Upcycle-Chic, a design and visualization environment that allows users to view possible upcycling solutions for a given piece of old furniture and explore different design variations. These solutions are generated from design strategies drawn from over 1,000 examples shared on the web and in books by professional and hobbyist furniture makers.


Chronos: Improving Recognizers’ Performance by Leveraging Gesture Continuity and Designers’ Involvement

[To appear]

We present Chronos, an algorithm framework that improves the performance of gesture recognizers by 1) extracting continuity information from gesture sequences and 2) enabling designers to optimize the decision-making rewards. The framework is implemented by integrating a dynamic Bayesian network (DBN) with a partially observable Markov decision process (POMDP).
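As a toy illustration of the first idea, continuity across frames can be modeled with a simple recursive Bayesian filter whose transition model favors the previous gesture class. This sketch is not the paper's DBN/POMDP implementation, and all probabilities below are made up:

```python
import numpy as np

# Hypothetical per-frame recognizer likelihoods for 3 gesture classes.
likelihoods = np.array([
    [0.6, 0.3, 0.1],
    [0.4, 0.4, 0.2],
    [0.2, 0.6, 0.2],   # a noisy frame that momentarily favors class 1
    [0.7, 0.2, 0.1],
])

# Transition model encoding continuity: a gesture tends to persist.
T = np.array([
    [0.90, 0.05, 0.05],
    [0.05, 0.90, 0.05],
    [0.05, 0.05, 0.90],
])

belief = np.full(3, 1 / 3)      # uniform prior over gesture classes
for obs in likelihoods:
    belief = T.T @ belief       # predict: propagate continuity
    belief = belief * obs       # update: weight by this frame's evidence
    belief /= belief.sum()      # renormalize to a distribution

print(belief.argmax())          # smoothed class decision for the sequence
```

The temporal prior lets the filter absorb the noisy third frame instead of flipping its decision on every per-frame score.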


Human Engagement and Trust with AI

When collaborating with AI, how do users feel, and do they trust the AI? To investigate these questions, we conducted several studies.
Investigating Pregnant Women's Engagement When Getting Emotion Support from a Chatbot

[To appear]

Pregnant women often seek emotional support in online forums. How would they feel if a chatbot replied to them? We are investigating users’ engagement when getting emotional support in a pregnancy-related forum where users do not know whether comments come from a real user or a chatbot. To build the chatbot, we designed a sequence-to-sequence model that generates diverse comments based on posts. We deployed the chatbot covertly in the community to reply to users' posts, and then evaluated users' engagement.