This section presents work from my Ph.D. and postdoc. For my latest work, please visit my lab website: https://rirolab.kaist.ac.kr/.
My current research focuses on developing highly capable robotic teammates, assistants, and collaborators that can assist humans in our environments. In this line of research, I have been working on natural-language-driven mobile manipulation, skill learning, and safety monitoring, among other topics. My research areas span from manipulation control to high-level natural language grounding.
Learning for manipulation aims to acquire manipulation skills from a wide range of knowledge sources. We introduce methodologies for learning manipulation constraints and motion parameters from demonstrations; a small illustrative sketch follows.
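The sketch below is a minimal, hypothetical illustration of learning motion parameters from a single demonstration, in the spirit of movement primitives; it is not the method from the papers listed below, and the demonstration, basis count, and bandwidth are placeholder assumptions.

```python
# Minimal illustration: learn motion parameters (RBF weights) from one
# demonstrated 1-D trajectory and reproduce it at a new temporal resolution.
import numpy as np

def fit_rbf_weights(demo, n_basis=20):
    """Regress normalized-RBF weights that reproduce a demonstrated trajectory."""
    t = np.linspace(0.0, 1.0, len(demo))                  # normalized phase
    centers = np.linspace(0.0, 1.0, n_basis)
    width = 2.0 * n_basis ** 2                            # shared bandwidth (heuristic)
    Phi = np.exp(-width * (t[:, None] - centers[None, :]) ** 2)
    Phi /= Phi.sum(axis=1, keepdims=True)                 # normalize basis activations
    weights, *_ = np.linalg.lstsq(Phi, demo, rcond=None)  # least-squares fit
    return centers, width, weights

def rollout(centers, width, weights, n_steps=100):
    """Reproduce the learned motion profile over n_steps."""
    t = np.linspace(0.0, 1.0, n_steps)
    Phi = np.exp(-width * (t[:, None] - centers[None, :]) ** 2)
    Phi /= Phi.sum(axis=1, keepdims=True)
    return Phi @ weights

demo = np.sin(np.linspace(0.0, np.pi, 200))               # placeholder demonstration
reproduction = rollout(*fit_rbf_weights(demo))
```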
[1] Shen Li*, Daehyung Park*, Yoonchang Sung*, Julie A. Shah, and Nicholas Roy, "Reactive Task and Motion Planning under Temporal Logic Specifications", IEEE Int'l. Conf. on Robotics and Automation, 2021. (ICRA2021) [PDF][Video] (*- authors contributed equally)
[2] Daehyung Park, Michael Noseworthy, Rohan Paul, Subhro Roy, and Nicholas Roy. "Inferring Task Goals and Constraints using Bayesian Nonparametric Inverse Reinforcement Learning", Conference on Robot Learning (CoRL2019) [PDF][Video] (Oral presentation, 5% oral acceptance rate)
[3] Michael Noseworthy, Rohan Paul, Subhro Roy, Daehyung Park, and Nicholas Roy. "Task-Conditioned Variational Autoencoders for Learning Movement Primitives", Conference on Robot Learning (CoRL2019) [PDF] (27.6% Acceptance Rate)
[4] Daehyung Park, Michael Noseworthy, Rohan Paul, Subhro Roy, and Nicholas Roy, "Joint Goal and Constraint Inference using Bayesian Nonparametric Inverse Reinforcement Learning," The 4th Multidisciplinary Conf. on Reinforcement Learning and Decision Making, 2019 [PDF]
Natural language is a convenient means for users to deliver high-level instructions. We introduce a language-guided manipulation framework that learns common-sense knowledge from natural language instructions and corresponding motion demonstrations.
[1] T. M. Howard, E. Stump, J. Fink, J. Arkin, R. Paul, D. Park, S. Roy, D. Barber, R. Bendell, K. Schmeckpeper, J. Tian, J. Oh, M. Wigness, L. Quang, B. Rothrock, J. Nash, M. R. Walter, F. Jentsch, N. Roy. "An Intelligence Architecture for Grounded Language Communication with Field Robots," Field Robotics, 2021. [Accepted]
[2] Daehyung Park*, Jacob Arkin*, Subhro Roy, Matthew R. Walter, Nicholas Roy, Thomas M. Howard, and Rohan Paul. "Multi-Modal Estimation and Communication of Latent Semantic Knowledge for Robust Execution of Robot Instructions", The International Journal of Robotics Research (IJRR), 2020. (*- authors contributed equally) [PDF][video]
[3] Subhro Roy, Michael Noseworthy, Rohan Paul, Daehyung Park and Nicholas Roy. "Leveraging Past References for Robust Language Grounding", Conf. on Computational Natural Language Learning (CoNLL 2019) [PDF]
[4] Daniel Nyga, Subhro Roy, Rohan Paul, Daehyung Park, Mihai Pomarlan, Michael Beetz, and Nicholas Roy. "Grounding Robot Plans from Natural Language Instructions with Incomplete World Knowledge", Conf. on Robot Learning (CoRL2018) [PDF][Video] (31% Acceptance Rate)
[5] Jacob Arkin, Rohan Paul, Daehyung Park, Subhro Roy, Nicholas Roy and Thomas M. Howard. "Real-Time Human-Robot Communication for Manipulation Tasks in Partially Observed Environments", Int'l. Symp. on Experimental Robotics (ISER2018) [PDF][Video]
Assistive robots have the potential to serve as caregivers, assisting with activities of daily living (ADLs) and instrumental activities of daily living (IADLs). Detecting when something has gone wrong could help assistive robots operate more safely and effectively around people. However, the complexity of interacting with people and objects in human environments can make errors difficult to detect. I introduce a multimodal execution monitoring system to detect and classify anomalous executions when robots operate near humans. The system's anomaly detector models multimodal sensory signals with a hidden Markov model (HMM) or an LSTM-based variational autoencoder (LSTM-VAE), and uses a likelihood threshold that varies with the progress of task execution. The system then classifies the type and cause of common anomalies using an artificial neural network. I evaluate the system with haptic, visual, auditory, and kinematic sensing during household tasks and human-robot interactive tasks (feeding assistance) performed by a PR2 robot with able-bodied participants and people with disabilities. In this evaluation, my methods outperformed other methods from the literature, yielding a higher area under the curve (AUC) and shorter detection delays. Multimodality also improved monitoring performance by enabling detection of a broader range of anomalies.
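The sketch below illustrates the core idea of the detector in heavily simplified form, assuming the hmmlearn library and placeholder features rather than the robot's actual multimodal signals; it is not the deployed implementation. It trains an HMM on non-anomalous executions and flags an execution whose log-likelihood falls below a threshold tied to its progress through the task.

```python
# Simplified sketch: HMM log-likelihood anomaly detection with an
# execution-progress-dependent threshold (placeholder data and parameters).
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Multimodal feature sequences (e.g., haptic, audio, kinematic) from
# successful executions; here, random placeholders of shape (T, 4).
train_seqs = [np.random.randn(100, 4) for _ in range(20)]
hmm = GaussianHMM(n_components=10, covariance_type="diag", n_iter=50)
hmm.fit(np.concatenate(train_seqs), lengths=[len(s) for s in train_seqs])

# For each progress bin, collect log-likelihoods of non-anomalous execution
# prefixes and set the threshold to mean - c * std (c tunes sensitivity).
n_bins, c = 10, 2.0
bin_scores = [[] for _ in range(n_bins)]
for seq in train_seqs:
    for t in range(5, len(seq) + 1, 5):
        b = min(n_bins - 1, int(n_bins * t / len(seq)))
        bin_scores[b].append(hmm.score(seq[:t]))
threshold = np.array([np.mean(s) - c * np.std(s) for s in bin_scores])

def is_anomalous(observed, expected_length):
    """Flag the current execution if its log-likelihood drops below the
    threshold associated with its progress through the task."""
    b = min(n_bins - 1, int(n_bins * len(observed) / expected_length))
    return hmm.score(observed) < threshold[b]
```

The LSTM-VAE variant can be sketched analogously, replacing the HMM log-likelihood with a reconstruction-based anomaly score.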
[1] Daehyung Park, Yuuna Hoshi, and Charles C. Kemp. “Multimodal Anomaly Detection for Robot-Assisted Feeding Using an LSTM-based Variational Autoencoder”, IEEE Robotics and Automation Letters (RA-L), 2018. [PDF][Video]
[2] Daehyung Park, Hokeun Kim, and Charles C. Kemp. “Multimodal Anomaly Detection for Assistive Robots”, Autonomous Robots, 2018. [PDF]
[3] Daehyung Park, Hokeun Kim, Yuuna Hoshi, Zackory Erickson, Ariel Kapusta, and Charles C. Kemp. “A Multimodal Execution Monitor with Anomaly Classification for Robot-Assisted Feeding”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2017) [PDF] [Video]
[4] Daehyung Park, Zackory Erickson, Tapomayukh Bhattacharjee, and Charles C. Kemp. “Multimodal Execution Monitoring for Anomaly Detection During Robot Manipulation”, IEEE International Conference on Robotics and Automation, 2016. (ICRA2016) [PDF][Video]
General-purpose mobile manipulators have the potential to serve as a versatile form of assistive technology. However, their complexity creates challenges, including the risk of being too difficult to use. We present a proof-of-concept robotic system for assistive feeding that consists of a Willow Garage PR2, a high-level web-based interface, and specialized autonomous behaviors for scooping and feeding yogurt. As a step towards use by people with disabilities, we evaluated our system with 5 able-bodied participants. All 5 successfully ate yogurt using the system and reported high rates of success for the system's autonomous behaviors. Also, Henry Evans, a person with severe quadriplegia, operated the system remotely to feed an able-bodied person. In general, the people who operated the system, including Henry, reported that it was easy to use. The feeding system also incorporates corrective actions designed to be triggered either autonomously or by the user. In an offline evaluation using data collected with the feeding system, a new version of our multimodal anomaly detection system outperformed prior versions.
[Publications]
[1] Daehyung Park, Yuuna Hoshi, Harshar P. Mahajan, Ho Keun Kim, Zackory Erickson, Wendy A. Rogers, and Charles C. Kemp. “Active Robot-Assisted Feeding with a General-Purpose Mobile Manipulator: Design, Evaluation, and Lessons Learned”, Robotics and Autonomous Systems (RAS), 2019 [PDF][Video]
[2] Ariel Kapusta, Philip Grice, Henry Clever, Yash Chitalia, Daehyung Park, and Charles C. Kemp. “A System for Bedside Assistance that Integrates a Robotic Bed and a Mobile Manipulator” [PDF][Video]
[3] Henry M. Clever, Ariel Kapusta, Daehyung Park, Zackory Erickson, Yash Chitalia, and Charles C. Kemp. “3D Human Pose Estimation on a Configurable Bed from a Pressure Image”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2018). [PDF]
[4] Daehyung Park and Charles C. Kemp, "Multimodal Execution Monitoring for Robot-Assisted Feeding," TechSAge State of the Science Conference, 2017 [PDF]
[5] Ariel Kapusta, Yash Chitalia, Daehyung Park, and Charles C. Kemp. "Collaboration Between a Robotic Bed and a Mobile Manipulator May Improve Physical Assistance for People with Disabilities," IEEE RO-MAN workshop on Behavior, Adaptation and Learning for Assistive Robotics (BAILAR), 2016 [PDF]
[6] Daehyung Park, Youkeun Kim, Zackory Erickson, and Charles C. Kemp. “Towards Assistive Feeding with a General-Purpose Mobile Manipulator”, ICRA2016 workshop on Human-Robot Interfaces for Enhanced Physical Interactions, 2016 [PDF]
In this work, we focus on methods for haptic mapping, planning, and control while reaching into unknown environments.
[Publications]
[1] T. Bhattacharjee, A. A Shenoi, D. Park, J. Rehg, and C. Kemp, "Combining Tactile Sensing and Vision for Rapid Haptic Mapping", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2015) [PDF][Video]
[2] D. Park, A. Kapusta, J. Hawke, and C. Kemp. “Interleaving Planning and Control for Efficient Haptically-guided Reaching in Unknown Environments”, IEEE-RAS International Conference on Humanoid Robots (Humanoids 2014) [PDF][Video]
[3] D. Park, A. Kapusta, Y. Kim, J. Rehg, and C. Kemp. “Learning to Reach into the Unknown: Selecting Initial Conditions When Reaching in Clutter”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2014) [PDF][Video]
[Software]
Tactile sensing plugin for Gazebo: https://github.com/gt-ros-pkg/gt-meka-sim