Nina Moorman
Education History
Doctor of Philosophy
August 2021 - Present (anticipated May 2026)
Qualifying Exam: Passed May 2023
I am a Ph.D. student in Computer Science at the Georgia Institute of Technology with a focus on human-interactive robot learning.
Bachelor of Science
August 2016 - May 2021
I graduated from the Georgia Institute of Technology with a Bachelor of Science in Computer Science, focusing on system intelligence and theory.
Research Experience
Graduate Research Assistant
August 2021 - Present, Georgia Institute of Technology
Language Models
- Developing natural-language explanations of robot failures, personalized to a user's situational awareness and AI literacy.
- Developed an attention-based approach to language-conditioned multi-task reinforcement learning (RL), using language models to convert goal specifications to semantically meaningful embeddings for learning agents [RA-L 2022].
Multi-Task Learning
- Developed hierarchical machine learning models to predict the need for and duration of mechanical ventilation, extracorporeal membrane oxygenation, and mortality.
- Employed graph neural networks (GNN) for team coordination strategies among decentralized agents in a partially-observable setting.
- Employed graph neural networks (GNN) to synthesize multi-modal, multi-agent interaction data and predict relational affect.
- Performed tennis ball localization using EKF sensor fusion to enable a Barrett WAM robotic arm mounted on a wheelchair to learn to play tennis [RA-L 2023].
Human-Interactive Robot Learning
- Assessed end-user attitudes towards embodied robots that learn, to determine how involved end-users would like to be in in-home robot learning [HRI 2023].
- Developed a novel learning from demonstration (LfD) framework that meta-learns a mapping from sub-optimal and heterogeneous human feedback to optimal labels [HRI 2022].
- Investigated non-experts’ ability to provide demonstrations to robots zero-shot in novel domains [RSS 2023].
Undergraduate Research Assistant
May 2019 - August 2021, Georgia Institute of Technology
Graph Theory
- Tightened the bounds of the smallest eigenvalue of the Laplacian of Cayley Graphs, thus bounding the bipartiteness of this class of graphs [EJC 2020].
Computer Vision
- Performed tennis ball localization using EKF sensor fusion for robotic wheelchair tennis [RA-L 2023].
- Performed pose estimation to study gene expression (via the mating ritual) of selectively bred fish.
- Investigated the neuro-mechanics of moth flight by analyzing individual wing muscle physical and electrical activity.
Industry Experience
Research Intern
May - August 2023, Honda Research Institute
- Currently filing a patent on employing geometric deep learning to model interpersonal dynamics.
- Developed and implemented human-human-robot interaction algorithms that positively influence interpersonal dynamics in a shared space.
Software Systems Engineer Intern
August 2019 - July 2020, Georgia Tech Research Institute (GTRI)
- Employed techniques in natural language processing (NLP) and computer vision (CV) for flight data processing automation, deployed through Apache Airflow.
- Implemented a web-based interface to the data-processing architecture using SQLite3.
Leadership Experience
Workshop Organizer
Organized the Human Robot Interaction for Aging in Place workshop at HRI 2023.
Vice President of RoboWomen
May 2023 - Present
Executive board member of the Robotics Graduate Student Organization (RoboGrads) at Georgia Tech
Organized an inter-institutional Women's Panel and Networking event at Georgia Tech for 50 undergraduate and graduate attendees, with 6 panelists from academia (Georgia Tech, MIT, and the University of Michigan) and industry (Toyota Research Institute and Amazon Lab126).
Organized a TED Women's Conference event at Georgia Tech featuring 5 concurrent robot demos led by female roboticists, with 26 attendees.
Secretary of TechMasters Club
June 2020 - December 2020
Executive board member of the local chapter of Toastmasters International at Georgia Tech
Teaching Experience
Head Teaching Assistant
CS 3630: Introduction to Robotics and Perception - Fall 2023, Spring 2024
CS 4400: Introduction to Database Systems - Summer 2019
Group Leader
August - October 2023
Instructor for a small-group, peer-led, extended-orientation program for first-semester graduate students as part of GT6000.
Research Mentor
Pablo Alvarez (BS) - Summer 2023
Aman Singh (MS) - Spring 2023
Publications
Journal Publications
Athletic Mobile Manipulator System for Robotic Wheelchair Tennis
Zulfiqar Zaidi*, Daniel Martin*, Nathaniel Belles, Viacheslav Zakharov, Arjun Krishna, Kin Man Lee, Peter Wagstaff, Sumedh Naik, Matthew Sklar, Sugju Choi, Yoshiki Kakehi, Ruturaj Patil, Divya Mallemadugula, Florian Pesce, Peter Wilson, Wendell Hom, Matan Diamond, Bryan Zhao, Nina Moorman, Rohan Paleja, Letian Chen, Esmaeil Seraj, and Matthew Gombolay
RA-L 2023, presented at IROS 2023
In this paper, we propose the first open-source, autonomous robot for playing regulation wheelchair tennis. We demonstrate the performance of our full-stack system in executing ground strokes and evaluate each of the system’s hardware and software components. The goal of this paper is to (1) inspire more research in human-scale robot athletics and (2) establish the first baseline towards developing a robot in future work that can serve as a teammate for mixed, human-robot doubles play. Our paper contributes to the science of systems design and poses a set of key challenges for the robotics community to address in striving towards a vision of human-robot collaboration in sports.
LanCon-Learn: Learning With Language to Enable Generalization in Multi-Task Manipulation
Andrew Silva, Nina Moorman, William Silva, Zulfiqar Zaidi, Nakul Gopalan, and Matthew Gombolay
RA-L 2022, presented at ICRA 2022
We present LanCon-Learn, a novel attention-based approach to language-conditioned multi-task learning in manipulation domains to enable learning agents to reason about relationships between skills and task objectives through natural language and interaction. We evaluate LanCon-Learn for both reinforcement learning and imitation learning, across multiple virtual robot domains along with a demonstration on a physical robot. LanCon-Learn achieves up to a 200% improvement in zero-shot task success rate and transfers known skills to novel tasks faster than non-language-based baselines, demonstrating the utility of language for goal specification.
On the Bipartiteness Constant and Expansion of Cayley Graphs
Nina Moorman, Peter Ralli, and Prasad Tetali
EJC 2020
Let G be a finite, undirected, d-regular graph and A(G) its normalized adjacency matrix, with eigenvalues 1 = λ₁(A) ≥ ··· ≥ λₙ(A) ≥ −1. It is a classical fact that λₙ = −1 if and only if G is bipartite. Our main result provides a quantitative separation of λₙ from −1 in the case of Cayley graphs, in terms of their expansion. Denoting by h_out the (outer boundary) vertex expansion of G, we show that if G is a non-bipartite Cayley graph (constructed using a group and a symmetric generating set of size d), then λₙ ≥ −1 + c·h_out²/d², for c an absolute constant. We exhibit graphs for which this result is tight up to a factor depending on d. This improves upon a recent result by Biswas and Saha (2021), who showed λₙ ≥ −1 + h_out⁴/(2⁹·d⁸). We also note that such a result could not be true for general non-bipartite graphs.
Conference Publications
Investigating the Impact of Experience on a User's Ability to Perform Hierarchical Abstraction
N. Moorman, N. Gopalan, A. Singh, E. Hedlund-Botti, M. Schrum, S. Yang, L. Seelam, M. Gombolay
RSS 2023; Best Student Paper Award Finalist
The field of Learning from Demonstration enables end-users, who are not robotics experts, to shape robot behavior. However, using human demonstrations to teach robots to solve long-horizon or multi-modal problems by leveraging the hierarchical structure of the task is still an unsolved problem. Prior work has yet to show that human users can provide sufficient demonstrations in novel domains without showing the demonstrators explicit teaching strategies for each domain. In this work, we investigate whether non-expert demonstrators can generalize robot teaching strategies to provide necessary and sufficient demonstrations to robots zero-shot in novel domains. We find that increasing participant experience with providing demonstrations improves their demonstrations' degree of sub-task abstraction (p<.001), teaching efficiency (p<.001), and sub-task redundancy (p=.046) in novel domains, allowing generalization in robot teaching. Our findings demonstrate for the first time that non-expert demonstrators can transfer experience from a series of training experiences to provide high-quality demonstrations when programming robots to complete task and motion planning problems in novel domains without the need for explicit instruction.
Impacts of Robot Learning on User Attitude and Behavior
N. Moorman, E. Hedlund-Botti, M. Schrum, M. Natarajan, M. Gombolay
HRI 2023
We investigate the impacts on end-users of in situ robot learning through a series of human-subjects experiments. We examine how different learning methods influence both in-person and remote participants’ perceptions of the robot. While we find that the degree of user involvement in the robot’s learning method impacts perceived anthropomorphism (p = .001), we find that it is the participants’ perceived success of the robot that impacts the participants’ trust in (p < .001) and perceived usability of the robot (p < .001) rather than the robot’s learning method. Therefore, when presenting robot learning, the performance of the learning method appears more important than the degree of user involvement in the learning. Furthermore, we find that the physical presence of the robot impacts perceived safety (p < .001), trust (p < .001), and usability (p < .014). Thus, for tabletop manipulation tasks, researchers should consider the impact of physical presence on experiment participants.
Negative Result for Learning from Demonstration: Challenges for End-Users Teaching Robots with Task and Motion Planning Abstractions
Nakul Gopalan, Nina Moorman, Manisha Natarajan, and Matthew Gombolay
RSS 2022
Prior works have not examined whether non-roboticist end-users are capable of providing such hierarchical demonstrations without explicit training from a roboticist showing how to teach each task. To address the limitations and assumptions of prior work, we conduct two novel human-subjects experiments to answer (1) what are the necessary conditions to teach users through hierarchy and task abstractions and (2) what instructional information or feedback is required to support users to learn to program robots effectively to solve novel tasks. Our first experiment shows that fewer than half (35.71%) of our subjects provide demonstrations with sub-task abstractions when not primed. Our second experiment demonstrates that users fail to teach the robot correctly when not shown a video demonstration of an expert’s teaching strategy for the exact task that the subject is training. Not even showing the video of an analogue task was sufficient. These experiments reveal the need for fundamentally different approaches in LfD which can allow end-users to teach generalizable long-horizon tasks to robots without the need to be coached by experts at every step.
MIND MELD: Personalized Meta-Learning for Robot-Centric Imitation Learning
Mariah L. Schrum, Erin Hedlund-Botti, Nina Moorman, and Matthew C. Gombolay
HRI 2022; Best Technical Paper Award
To create a more human-aware version of robot-centric LfD, we present Mutual Information-driven Meta-learning from Demonstration (MIND MELD). MIND MELD meta-learns a mapping from suboptimal and heterogeneous human feedback to optimal labels, thereby improving the learning signal for robot-centric LfD. The key to our approach is learning an informative personalized embedding using mutual information maximization via variational inference. The embedding then informs a mapping from human provided labels to optimal labels. We evaluate our framework in a human-subjects experiment, demonstrating that our approach improves corrective labels provided by human demonstrators. Our framework outperforms baselines in terms of ability to reach the goal (p < .001), average distance from the goal (p = .006), and various subjective ratings (p = .008).
Effects of Social Factors and Team Dynamics on Adoption of Collaborative Robot Autonomy
Mariah L. Schrum, Glen Neville, Michael Johnson, Nina Moorman, Rohan Paleja, Karen M. Feigh, and Matthew C. Gombolay
HRI 2021
In an analog manufacturing environment, we explore how these various factors influence an individual's willingness to work with a robot over a human co-worker in a collaborative Lego building task. We specifically explore how this willingness is affected by: 1) the level of social rapport established between the individual and his or her human co-worker, 2) the anthropomorphic qualities of the robot, and 3) factors including trust, fluency, and personality traits. Our results show that a participant's willingness to work with automation decreased due to lower perceived team fluency (p=0.045), rapport established between a participant and their co-worker (p=0.003), the gender of the participant being male (p=0.041), and a higher inherent trust in people (p=0.018).
Poster Sessions, Presentations, and Invited Talks
Talk on Organizing a Workshop at the AI-CARING Student Symposium at UMass Lowell 04/2024
Robotics: Sciences and Systems (RSS) 2023 07/2023
Workshop on Learning for Task and Motion Planning at RSS 2023 07/2023
AI-CARING National Artificial Intelligence Research Institute Research Symposium at CMU 03/2023
ACM/IEEE International Conference on Human-Robot Interaction (HRI 2023) 03/2023 (Video, Slides)
Charlie and Harriet Shaffer Cognitive Empowerment Program (CEP) Research Symposium 02/2023
Institute for Robotics and Intelligent Machines Robotics Days for Industry 11/2022
AAAI Fall Symposia Series on Artificial Intelligence for Human-Robot Interaction 11/2022
International Conference on Robotics and Automation (ICRA 2022) 05/2022
AI-CARING National Artificial Intelligence Research Institute Research Symposium at GaTech 04/2022
Machine Learning in Human-Robot Collaboration: Bridging the Gap Workshop at HRI 2022 03/2022
Awards
RSS 2023 | Best Student Paper Award Finalist
HRI 2022 | Best Technical Paper Award
2020 | President’s Undergraduate Research Award ($1,500)
2016-2021 | Zell Miller Scholarship ($50,000)
2016-2021 | Georgia Tech's Honors Program
Reviewing Experience
AAAI Conference on Artificial Intelligence (AAAI)
Conference on Robot Learning (CoRL)
IEEE International Conference on Robotics and Automation (ICRA)
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Robotics: Science and Systems (RSS)