Research

Auditory Displays & Sonification

Immersive Interactive Sonification Platform

Taking embodied cognition and interactive sonification into account, we have developed an immersive interactive sonification platform, "iISoP"*, at the Immersive Visualization Studio (IVS) at Michigan Tech. Twelve Vicon cameras around the studio walls track users' location, movement, and gestures, and the sonification system generates speech, music, and sounds in real time based on those tracking data. The data are also visualized on a 24-monitor display wall. Using a fine-tuned gesture-based interactive sonification system, the performing artist Tony Orrico created digital works on the display wall as well as Penwald drawings on canvas. Users can also play a large virtual piano by hopping around; the piano responds to their movements. We are conducting artistic experiments with robots and puppies as well. Currently, we are focusing on more accessible software for mapping motion to sound and on a portable version of the motion tracking and sonification system. Future work includes implementing natural user interfaces and a sonification-inspired storytelling system for children. This project has been partly supported by the Department of Visual and Performing Arts at Michigan Tech, the International School of Art & Design at Finlandia University, and Superior Ideas crowdfunding.

* The name iISoP is derived from Aesop, the ancient Greek storyteller.
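To make the motion-to-sound mapping concrete, here is a minimal sketch of the kind of mapping the platform performs: a tracked 3D position is mapped onto pitch, loudness, and stereo pan. The room dimensions and mapping ranges are illustrative assumptions, not the actual iISoP calibration.

```python
# Minimal sketch of a motion-to-sound mapping (assumed ranges, not iISoP's).
from dataclasses import dataclass

@dataclass
class TrackedPoint:
    x: float  # left-right position in meters
    y: float  # front-back position in meters
    z: float  # height in meters

# Assumed working volume of the tracked space (meters).
ROOM = {'x': (-4.0, 4.0), 'y': (0.0, 8.0), 'z': (0.0, 3.0)}

def normalize(value, bounds):
    lo, hi = bounds
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

def position_to_sound(p: TrackedPoint):
    """Map a tracked position to (frequency Hz, amplitude 0..1, pan -1..1)."""
    pitch = 220.0 + 660.0 * normalize(p.z, ROOM['z'])   # higher up -> higher pitch
    amplitude = 0.2 + 0.8 * normalize(p.y, ROOM['y'])   # deeper into the room -> louder
    pan = 2.0 * normalize(p.x, ROOM['x']) - 1.0         # left-right -> stereo pan
    return pitch, amplitude, pan

if __name__ == '__main__':
    print(position_to_sound(TrackedPoint(x=1.5, y=6.0, z=1.7)))
```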

iISoP

Dancer Sonification

Dance-based Sonification

Traditionally, dancers choreograph to music. In contrast, the ultimate goal of this project is to have dancers improvise music and visuals through their dancing. Dancers still play their expected role (dancing), but simultaneously take on unexpected roles (improvising music and visuals). From a traditional perspective, this may unsettle dancers and audiences, but it certainly adds aesthetic dimensions to their work. In this project, we adopted emotions and affect as the medium of communication between gestures and sounds. To maximize affective gesture expression, expert dancers have been recruited to dance, move, and gesture inside the iISoP system while receiving both visual and auditory output in real time. A combination of Laban Movement Analysis and affective gesturing was implemented in the sonification parameter-mapping algorithms.
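As a rough illustration of how Laban-style effort qualities can drive sonification parameters, the sketch below estimates Time and Weight efforts from velocity and acceleration magnitudes and maps them to tempo and loudness. The thresholds and mappings are illustrative assumptions, not the algorithms used in the actual performances.

```python
# Minimal sketch: Laban-inspired effort estimates -> sonification parameters.
import numpy as np

def effort_qualities(positions, dt=1.0 / 120.0):
    """Estimate rough Time/Weight effort scores from a (frames, 3) trajectory."""
    velocity = np.diff(positions, axis=0) / dt
    acceleration = np.diff(velocity, axis=0) / dt
    time_effort = np.linalg.norm(velocity, axis=1).mean()        # speed ~ suddenness
    weight_effort = np.linalg.norm(acceleration, axis=1).mean()  # force ~ strength
    return time_effort, weight_effort

def efforts_to_sound(time_effort, weight_effort):
    """Map effort scores to (tempo in BPM, loudness 0..1); ranges are assumptions."""
    tempo = 60.0 + min(time_effort, 4.0) / 4.0 * 120.0     # 60..180 BPM
    loudness = min(weight_effort, 40.0) / 40.0             # clipped to 0..1
    return tempo, loudness

if __name__ == '__main__':
    frames = np.cumsum(np.random.randn(240, 3) * 0.01, axis=0)  # fake two-second gesture
    print(efforts_to_sound(*effort_qualities(frames)))
```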


Dancing Drone for Interactive Visualization and Sonification

The recent dramatic advances in machine learning and artificial intelligence have propelled robotics research into a new phase. Even though it is not a mainstream branch of robotics research, robotic art is more widespread than ever before. In this design research, we have designed and implemented a novel robotic art framework in which human dancers and drones collaboratively create visualization and sonification in an immersive virtual environment. This dynamic, intermodal interaction opens up new potential for novel aesthetic dimensions.

Drone and Myo

TweetSonification

Real-Time Tweet Sonification for Remembrance of the Sewol Ferry

On April 16, 2014, the Sewol ferry sank off the coast of South Korea. We aim to remember this tragedy together, to support the people who lost family members in the accident, and to raise awareness so that the matter can be resolved promptly. Artists and scholars have shown their support for the families through performances and writings. In the same spirit, we have created a real-time tweet sonification program using "#416", the date of the tragedy. The text of tweets containing #416 is translated into Morse code and sonified in real time.
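The sketch below shows the core idea: translate tweet text into Morse code and render it as a sequence of tones. The Morse mapping is standard; the timing and frequency choices are illustrative assumptions, not the values used in the actual installation.

```python
# Minimal sketch of the #416 tweet sonification: text -> Morse -> tones.
import numpy as np
import wave

MORSE = {
    'A': '.-',  'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',    'F': '..-.',
    'G': '--.', 'H': '....', 'I': '..',   'J': '.---', 'K': '-.-',  'L': '.-..',
    'M': '--',  'N': '-.',   'O': '---',  'P': '.--.', 'Q': '--.-', 'R': '.-.',
    'S': '...', 'T': '-',    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-',
    'Y': '-.--','Z': '--..', '0': '-----','1': '.----','2': '..---','3': '...--',
    '4': '....-','5': '.....','6': '-....','7': '--...','8': '---..','9': '----.',
    '#': '', ' ': ' ',
}

def text_to_morse(text):
    """Translate tweet text to Morse code, skipping unsupported characters."""
    return ' '.join(MORSE.get(ch, '') for ch in text.upper())

def morse_to_audio(morse, rate=44100, freq=660.0, unit=0.08):
    """Render Morse code as a mono signal (dot = 1 unit, dash = 3 units)."""
    def tone(duration):
        t = np.arange(int(rate * duration)) / rate
        return 0.5 * np.sin(2 * np.pi * freq * t)
    silence = lambda duration: np.zeros(int(rate * duration))
    chunks = []
    for symbol in morse:
        if symbol == '.':
            chunks += [tone(unit), silence(unit)]
        elif symbol == '-':
            chunks += [tone(3 * unit), silence(unit)]
        else:  # gap between letters/words
            chunks.append(silence(2 * unit))
    return np.concatenate(chunks) if chunks else silence(unit)

if __name__ == '__main__':
    signal = morse_to_audio(text_to_morse("#416 remember"))
    with wave.open('tweet_morse.wav', 'w') as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(44100)
        f.writeframes((signal * 32767).astype(np.int16).tobytes())
```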

Auditory Display Design for Electronic Devices

To add novel value (i.e., value-added design) to electronic devices (e.g., home appliances, mobile devices, in-vehicle infotainment), we have designed various auditory displays (e.g., auditory icons, earcons, spearcons, and spindexes). The design process involves not only cognitive mappings (from a human factors perspective), but also affective mappings (from an aesthetics and pleasantness perspective). We have also developed new sound-assessment methodologies such as Sound Card Sorting and Sound-Map Positioning. Current projects include the design of Auditory Emoticons and Lyricons (Lyrics + Earcons). Building on these diverse experiences, we will continue to conduct research on auditory display design to create products that are fun and engaging as well as effective and efficient.
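For readers unfamiliar with earcons, here is a minimal sketch of one: a short, abstract two-note motif that could signal an event such as "message received". The note choices, durations, and envelope are illustrative assumptions; real earcon design in our projects also weighs affective (pleasantness) mappings, not just cognitive ones.

```python
# Minimal sketch of a simple rising two-note earcon written to a WAV file.
import numpy as np
import wave

RATE = 44100

def note(freq, duration=0.15, rate=RATE):
    """Generate one sine-wave note with a quick decay envelope (avoids clicks)."""
    t = np.arange(int(rate * duration)) / rate
    envelope = np.exp(-4 * t / duration)
    return 0.6 * envelope * np.sin(2 * np.pi * freq * t)

def earcon(freqs=(523.25, 783.99)):
    """Concatenate a rising two-note motif (C5 -> G5 by default)."""
    gap = np.zeros(int(RATE * 0.03))
    return np.concatenate([x for f in freqs for x in (note(f), gap)])

if __name__ == '__main__':
    signal = earcon()
    with wave.open('earcon.wav', 'w') as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes((signal * 32767).astype(np.int16).tobytes())
```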

Auditory Menu

Automotive User Interfaces & Distracted Driving

Driving 3

Driving Performance Data-based Sonification

In addition to traditional collision warning sounds and the beeps and voice prompts of personal navigation devices, we have devised more dynamic in-vehicle sonification systems. For example, we designed a fuel-efficiency sonification based on real-time driving performance data. We have developed software that extracts driving performance data (speed, lane deviation, torque, steering wheel angle, pedal pressure, crashes, etc.) from our simulator, and all of these data can be mapped onto sound parameters. This project can also be extended to a higher-level sonification of nearby traffic based on collective traffic data. This project is supported by an industry partner.
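The sketch below illustrates the parameter-mapping idea: each incoming sample of simulator data is mapped onto sound parameters (here, pitch from speed, loudness from pedal pressure, and pan from lane deviation). The field names, ranges, and mapping functions are illustrative assumptions, not the actual mappings used with our simulator software.

```python
# Minimal sketch of a parameter-mapping sonification for driving data.
from dataclasses import dataclass

@dataclass
class DrivingSample:
    speed_kph: float        # vehicle speed
    pedal_pressure: float   # normalized 0..1
    lane_deviation_m: float # lateral offset from lane center

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def map_to_sound(sample: DrivingSample):
    """Map one driving sample to (frequency Hz, amplitude 0..1, pan -1..1)."""
    # Speed 0..160 km/h -> pitch 220..880 Hz.
    freq = 220.0 + (clamp(sample.speed_kph, 0, 160) / 160.0) * 660.0
    # Harder pedal presses sound louder.
    amp = 0.2 + 0.8 * clamp(sample.pedal_pressure, 0, 1)
    # Lane deviation (+/- 1.5 m) pans the sound toward the drift direction.
    pan = clamp(sample.lane_deviation_m / 1.5, -1, 1)
    return freq, amp, pan

if __name__ == '__main__':
    for s in [DrivingSample(50, 0.3, 0.1), DrivingSample(120, 0.9, -0.8)]:
        print(map_to_sound(s))
```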


Sonically-Enhanced In-Vehicle Gesture Interactions

In-vehicle touchscreen displays offer many benefits, but they can also distract drivers. We are exploring the potential of gesture control systems to support or replace potentially dangerous touchscreen interactions. We do this by replacing information that is usually acquired visually with auditory displays that are both functional and beautiful. In collaboration with our industry partner, our goal is to create an intuitive, usable interface that improves driver safety and enhances the driving experience. This project was supported by Hyundai Motor Company.

Vehicle Gesture

Facial Detection

In-Car Affect Recognition and Regulation System

The goal of this project is to increase driver safety by taking drivers' emotions and affect into account, in addition to cognition. To this end, we have implemented a dynamic real-time affect recognition and regulation system. For the system to accurately detect a driver's essential emotional states, we have also identified a driving-specific emotion taxonomy. Using driving simulators, we have demonstrated that specific emotional states (e.g., anger, fear, happiness, sadness, and boredom) have different impacts on driving performance, risk perception, situation awareness, and perceived workload. For affective state detection, we have used eye tracking, facial expression recognition, respiration, heart rate (ECG), brain activity (fNIRS), grip strength, and smartphone sensors. For the regulation part, we have been testing various music conditions (e.g., emotional music, self-selected music), sonification (e.g., real-time sonification based on affect data), and speech-based systems (e.g., an emotion regulation prompt vs. a situation awareness prompt). Part of this project is supported by the Michigan Tech Transportation Institute and the Korea Automobile Testing and Research Institute.
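To make the detect-then-regulate loop concrete, here is a minimal sketch in which a detected affective state is mapped to one of the regulation strategies we have been testing (music, sonification, or a speech prompt). The state labels and the specific policy below are illustrative assumptions, not the study protocol.

```python
# Minimal sketch of an assumed affect-to-regulation policy (not the study protocol).
from enum import Enum

class Affect(Enum):
    ANGER = 'anger'
    FEAR = 'fear'
    HAPPINESS = 'happiness'
    SADNESS = 'sadness'
    BOREDOM = 'boredom'
    NEUTRAL = 'neutral'

def choose_regulation(state: Affect) -> str:
    """Pick a regulation strategy for the detected driver state (assumed policy)."""
    if state in (Affect.ANGER, Affect.FEAR):
        return 'play calming self-selected music'
    if state is Affect.SADNESS:
        return 'deliver a situation awareness speech prompt'
    if state is Affect.BOREDOM:
        return 'start real-time sonification of driving data'
    return 'no intervention'

if __name__ == '__main__':
    print(choose_regulation(Affect.ANGER))
```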


Effects of Music on Emotional Driver Behavior

The goal of this project is to understand driver emotion from a comprehensive perspective and to help emotional drivers mitigate the effects of emotion on driving performance. Our empirical research has shown that happy music and self-selected music can help angry drivers drive better. However, self-selected "sad" music may degrade driving performance. This research also demonstrated the potential of using fNIRS and ECG to monitor drivers' affective states.

Eye Tracking

Simulated Rail Crossing

Validation of Simulated Driving Behavior by Comparing with Naturalistic Driving Behavior

Investigating driving behavior with a driving simulator is widely accepted in the research community, and railroad researchers have recently started conducting rail-crossing research with driving simulators as well. While simulators offer a number of benefits, the validity of simulator-based findings still needs to be addressed further. To this end, we are comparing simulated driving behavior with naturalistic driving behavior data. This project is supported by the Federal Railroad Administration under the US DOT.


Driver Behavior Analysis at Rail Crossings and Multimodal Warning Design

One potential approach to reducing grade crossing accidents is to better understand the effects of warning systems at grade crossings. To this end, we investigated drivers' behavior patterns (e.g., eye-tracking data and driving performance) with different types of warnings as their car approached a grade crossing. We then examined the effects of in-vehicle distractors (phone calls, radio, etc.) on warning perception and behavior change. Based on these preliminary data, we will design better warning systems, including in-vehicle visual and auditory warnings. This National University Rail Center project is supported by Michigan DOT and US DOT-OST.

Eye Tracking

Eye Tracking

Multisensory Cue Congruency in Lane Change Test

The Auditory Spatial Stroop experiment investigates whether the location or the meaning of a stimulus more strongly influences performance when the two conflict. For example, the word "LEFT" or "RIGHT" is presented in a position that is either congruent or incongruent with its meaning. This paradigm applies readily to the complex driving environment: the navigation device tells you to turn right while, at the same time, the collision avoidance system warns you of a hazard approaching from the right. How should we respond to such conflicting cues? To explore this problem space, we conduct Auditory Spatial Stroop research using the OpenDS Lane Change Test to investigate how driving behavior varies under different multimodal cue combinations (visual, verbal, and non-verbal; temporally, spatially, and semantically congruent or incongruent).
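The sketch below shows how the congruency manipulation can be enumerated for such a study: each trial crosses the spoken word's meaning with the side it is played from, yielding congruent and incongruent cues. The condition names are illustrative assumptions, not the exact design of our study.

```python
# Minimal sketch of enumerating Auditory Spatial Stroop trial conditions.
from itertools import product

MEANINGS = ['LEFT', 'RIGHT']     # semantic content of the spoken cue
LOCATIONS = ['left', 'right']    # physical side the cue is played from

def build_trials():
    trials = []
    for meaning, location in product(MEANINGS, LOCATIONS):
        trials.append({
            'word': meaning,
            'speaker_side': location,
            'congruent': meaning.lower() == location,
        })
    return trials

if __name__ == '__main__':
    for t in build_trials():
        print(t)
```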

Multimodal Displays for Hand-over/Take-over in Automated Vehicles

Vehicle automation is becoming more widespread, and as automation increases, new opportunities and challenges have emerged. Among these new design problems, we aim to identify opportunities and directions for auditory interactions in highly automated vehicles that provide a better driver experience and secure road safety. Specifically, we are designing and evaluating multimodal displays for the hand-over/take-over procedure. In this project, we collaborate with Kookmin University and Stanford University. This project is supported by the Korea Automobile Testing and Research Institute.

Automated Vehicles

Accessible Computing & Health Technology

Robot Students

Music-based Interactive Robot Design for Children with ASD

People with autism spectrum disorder are known to face three major challenges: impairments in social relationships, social communication, and imaginative thought. The first two share a common element: social interaction with people. The literature has shown that people with autism are more likely to interact with computers and animals than with humans, because they are fundamentally simpler than humans. Recent research also shows that interactive robots might facilitate their social interaction. Along these lines, we have adopted a small iOS-based interactive robot, "Romo", as our research platform and developed an emotional interaction platform for children with autism. Based on our facial expression detection (client) and sonification server systems, we are creating a social interaction game. In addition, we use multiple Kinects in the room to oversee the overall interaction scene (e.g., distance, turn-taking, chasing). This project is supported by NIH (National Institutes of Health) via the NRI (National Robotics Initiative) program.
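As a rough illustration of the client/server split described above, the sketch below has a facial expression detection client send a detected emotion label to a sonification server over UDP as JSON. The port, message format, and emotion labels are illustrative assumptions, not the actual protocol used in the project.

```python
# Minimal sketch of an assumed expression-client -> sonification-server message.
import json
import socket

SERVER_ADDR = ('127.0.0.1', 9000)   # assumed address of the sonification server

def send_expression(label: str, confidence: float):
    """Send one detected facial expression to the sonification server."""
    message = json.dumps({'expression': label, 'confidence': confidence})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode('utf-8'), SERVER_ADDR)

if __name__ == '__main__':
    send_expression('happy', 0.87)
```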


Making Live Theatre with Multiple Robots as Actors

The trend of integrating art and science is pervasive in formal education: STEM education is evolving into the STEAM (Science, Technology, Engineering, Art, and Math) movement by adding art and design to the equation. We have been developing STEAM education programs specifically for underrepresented students in a rural area, including female students, students from low-income families, and students with disabilities. Most recently, we have been developing a new afterschool program with a local elementary school entitled "Making Live Theatre with Multiple Robots as Actors". In this program, students learn and think about (1) technology and engineering (e.g., computational thinking); (2) art and design (e.g., writing, stage design, preparing music); (3) collaboration (e.g., discussion, role allocation); and (4) co-existence with robots (e.g., philosophical and ethical questions about the roles and limitations of robots).

Robot Theatre

Blindfold

Indoor Way Finding for Visually Impaired People

Many outdoor navigation systems for visually impaired people already exist; however, few attempts have been made to support their indoor navigation. Consider blind friends attending a conference and staying at a hotel: they may not be familiar with the layout of the new room or of the hotel as a whole. We have interviewed visually impaired people and identified current and potential problems. Based on these findings, we have designed and developed an indoor navigation system called "Personal Radar", which uses an ultrasonic belt for object detection and tactile and auditory feedback as the display. For the next version, we are exploring the use of a new lidar system.
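The sketch below illustrates the feedback mapping idea behind such a belt: a distance reading from one ultrasonic sensor is mapped to a tactile intensity and an auditory beep rate, so that closer obstacles produce stronger vibration and faster beeps. The ranges and mapping functions are illustrative assumptions, not the tuned parameters of the actual device.

```python
# Minimal sketch of a distance-to-feedback mapping for an ultrasonic belt.
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def feedback_for_distance(distance_m: float):
    """Map obstacle distance (0.2..4.0 m) to (vibration 0..1, beeps per second)."""
    d = clamp(distance_m, 0.2, 4.0)
    proximity = 1.0 - (d - 0.2) / (4.0 - 0.2)   # 1.0 = very close, 0.0 = far
    vibration = proximity                        # stronger vibration when close
    beep_rate = 1.0 + 9.0 * proximity            # 1..10 beeps per second
    return vibration, beep_rate

if __name__ == '__main__':
    for d in (0.3, 1.0, 3.5):
        print(d, feedback_for_distance(d))
```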

Musical Exercise for People with Visual Impairments

Exercising independently is critical to maintaining good health, but it is especially hard for people with visual impairments to exercise without proper guidance. To address this problem, we have developed a Musical Exercise platform for people with visual impairments using the Microsoft Kinect. With the audio feedback of Musical Exercise, people with visual impairments can consistently perform exercises with good form. Our empirical assessment shows that the system is a usable exercise assistant, and the results confirm that a specific sound design (i.e., discrete feedback) works better than the other sound designs or no sound at all.

MusicalExercise
SmartExerciseApp

Sonically-Enhanced Smart Exercise Application Using A Wearable Sensor

The Smart Exercise application is an Android application paired with a wearable Bluetooth IMU sensor. It is designed to provide real-time auditory and visual feedback on users' body motion while simultaneously collecting kinematic data on their performance. The application aims to help users improve physical and cognitive function, increase their motivation to exercise, and give researchers and physical therapists access to accurate data on their participants' or patients' performance and exercise completion, without the need for expensive additional hardware.
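The sketch below illustrates the feedback-plus-logging loop: each sample of a simulated joint-angle stream is compared with a target range and mapped to a pitch cue, while the raw kinematics are logged for later analysis. The sensor fields, target range, and mapping are illustrative assumptions, not the application's actual implementation.

```python
# Minimal sketch of an exercise feedback loop with kinematic logging.
import csv
import math

TARGET_RANGE = (80.0, 100.0)   # assumed "good form" joint angle in degrees

def angle_to_pitch(angle_deg: float) -> float:
    """Map joint angle 0..180 degrees onto a pitch between 220 and 880 Hz."""
    fraction = max(0.0, min(1.0, angle_deg / 180.0))
    return 220.0 * math.pow(2.0, 2.0 * fraction)   # two-octave range

def process_stream(samples, log_path='kinematics.csv'):
    """Yield (pitch, in_target) per sample and log every sample to CSV."""
    with open(log_path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['t', 'angle_deg', 'pitch_hz', 'in_target'])
        for t, angle in samples:
            pitch = angle_to_pitch(angle)
            in_target = TARGET_RANGE[0] <= angle <= TARGET_RANGE[1]
            writer.writerow([t, angle, round(pitch, 1), in_target])
            yield pitch, in_target

if __name__ == '__main__':
    fake_stream = [(i * 0.1, 60 + 40 * math.sin(i / 5)) for i in range(50)]
    for pitch, ok in process_stream(fake_stream):
        pass   # a real app would synthesize the pitch and cue form errors here
```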

Novel Interfaces & Interactions


Brain-Computer Interfaces & Cognitive Neuroergonomics

Brain-computer interfaces are among the hottest topics in novel interaction design. We are testing an EEG device (EPOC) to determine whether it can serve effectively and efficiently as a monitoring and control tool. Brain waves can be one of many signals that provide information about users' states (e.g., arousal, fatigue, workload, or emotions). Moreover, people with disabilities, or people whose hands are occupied, can control interfaces using an EEG device. We also plan to use it as a composition tool: in the iISoP platform, we will improvise harmonized music based on the body (movement tracking data) and the mind (brainwave data). In addition, we investigate users' cognitive and affective states using fNIRS (functional near-infrared spectroscopy), EEG, ECG, and EMG while they perform a novel task (e.g., a novel interface), an emotional task (e.g., seeing or hearing emotional stimuli), or dual tasks. Based on these experiments, we also aim to identify the relationship between cognitive and affective processes.
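As an illustration of the kind of signal summary that can feed such monitoring, the sketch below extracts band-power features from a single EEG channel and computes a simple beta/(alpha+theta) ratio, a commonly used rough engagement measure. The sampling rate, band limits, and the synthetic test signal are illustrative assumptions; real use would read data from the headset's SDK instead.

```python
# Minimal sketch of EEG band-power features for a rough engagement index.
import numpy as np

def band_power(signal, rate, low, high):
    """Average power of `signal` within the [low, high] Hz band via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return power[mask].mean()

def engagement_index(signal, rate=128):
    """Simple beta / (alpha + theta) ratio computed from one channel."""
    theta = band_power(signal, rate, 4, 8)
    alpha = band_power(signal, rate, 8, 13)
    beta = band_power(signal, rate, 13, 30)
    return beta / (alpha + theta)

if __name__ == '__main__':
    rate, seconds = 128, 4
    t = np.arange(rate * seconds) / rate
    # Synthetic channel: strong alpha (10 Hz) with weaker beta (20 Hz) and noise.
    eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
    eeg += 0.2 * np.random.randn(len(t))
    print(round(engagement_index(eeg, rate), 3))
```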


Exploring Next Generation IN-vehicle INterfaces

Drawing on well-established methodologies in psychology and human factors, and on professional experience designing automotive user interfaces, we explore next-generation in-vehicle interfaces and services in terms of driving performance, safety, and user (driver and passenger) experience. For example, we investigate the possibility of using subliminal cues (e.g., faint light, soft sounds, scent, or tactile feedback) for drivers. To explore next-generation in-vehicle interfaces more actively, we have hosted a series of workshops (e.g., at AutomotiveUI, UBICOMP, ICAD, and Persuasive Tech) and edited journal special issues (e.g., Pervasive and Mobile Computing and MIT Presence: Teleoperators and Virtual Environments). This project is supported by Michigan Tech Transportation Institute Initiative Funding.
