Nina Moorman

Education History

Ph.D.

August 2021 - Present (anticipated graduation May 2026) I am a Ph.D. candidate in Computer Science at the Georgia Institute of Technology, focusing on interactive robot learning and care robotics.

Bachelor's

August 2016 - May 2021 I graduated from the Georgia Institute of Technology with a bachelor's degree in Computer Science, focusing on system intelligence and theory.

Research Experience

Graduate Research Assistant 

August 2021 - Present, Georgia Institute of Technology
Multi-Agent Coordination
  • Employed graph neural networks (GNNs) to learn team coordination strategies among decentralized agents in a partially observable setting.

Multi-Task Learning
  • Developed an attention-based approach to language-conditioned multi-task reinforcement learning (RL), using language models to convert goal specifications to semantically meaningful embeddings for learning agents.

Interactive Robot Learning from Suboptimal and Heterogeneous Demonstrators
  • Characterized end-user attitudes towards co-located embodied care robots that learn.
  • Developed a heterogeneous inverse reinforcement learning (IRL) approach that meta-learns a mapping from suboptimal and heterogeneous human feedback to optimal labels.
  • Investigated non-experts’ ability to provide demonstrations to robots zero-shot in novel domains.

Undergraduate Research Assistant

May 2019 - August 2021, Georgia Institute of Technology
Graph Theory
  • Tightened the bound on the smallest eigenvalue of the normalized adjacency matrix of Cayley graphs, thereby bounding the bipartiteness constant of this class of graphs.

Computer Vision
  • Performed tennis ball localization using EKF sensor fusion for robotic wheelchair tennis.
  • Performed pose estimation to study gene expression (via the mating ritual) of selectively bred fish.
  • Investigated the neuro-mechanics of moth flight by analyzing individual wing muscle physical and electrical activity.

Industry Experience

Research Intern

May - August 2023, Honda Research Institute
  • Filing a patent on employing geometric deep learning to model interpersonal dynamics.
  • Developed and implemented human-human-robot interaction algorithms that positively influence interpersonal dynamics in a shared space.

Software Systems Engineer Intern

August 2019 - July 2020, Georgia Tech Research Institute (GTRI)
  • Employed techniques in natural language processing (NLP) and computer vision (CV) for flight data processing automation, deployed through Apache Airflow.
  • Implemented a web-based interface to the data-processing architecture, backed by SQLite3.

Leadership Experience

Workshop Organizer

Vice President of RoboWomen

May 2023 - Present

Secretary of TechMasters Club

June 2020 - December 2020

Teaching Experience

Head Teaching Assistant

Summer 2019; Fall 2023

Group Leader

August - October 2023

Research Mentor

Publications

Journal Publications


Athletic Mobile Manipulator System for Robotic Wheelchair Tennis
Zulfiqar Zaidi*, Daniel Martin*, Nathaniel Belles, Viacheslav Zakharov, Arjun Krishna, Kin Man Lee, Peter Wagstaff, Sumedh Naik, Matthew Sklar, Sugju Choi, Yoshiki Kakehi, Ruturaj Patil, Divya Mallemadugula, Florian Pesce, Peter Wilson, Wendell Hom, Matan Diamond, Bryan Zhao, Nina Moorman, Rohan Paleja, Letian Chen, Esmaeil Seraj, and Matthew Gombolay
RA-L 2023, presented at IROS 2023
In this paper, we propose the first open-source, autonomous robot for playing regulation wheelchair tennis. We demonstrate the performance of our full-stack system in executing ground strokes and evaluate each of the system’s hardware and software components. The goal of this paper is to (1) inspire more research in human-scale robot athletics and (2) establish the first baseline towards developing a robot in future work that can serve as a teammate for mixed, human-robot doubles play. Our paper contributes to the science of systems design and poses a set of key challenges for the robotics community to address in striving towards a vision of human-robot collaboration in sports.

LanCon-Learn: Learning With Language to Enable Generalization in Multi-Task Manipulation
Andrew Silva, Nina Moorman, William Silva, Zulfiqar Zaidi, Nakul Gopalan, Matthew Gombolay
RA-L 2022, presented at ICRA 2022
We present LanCon-Learn, a novel attention-based approach to language-conditioned multi-task learning in manipulation domains to enable learning agents to reason about relationships between skills and task objectives through natural language and interaction. We evaluate LanCon-Learn for both reinforcement learning and imitation learning, across multiple virtual robot domains along with a demonstration on a physical robot. LanCon-Learn achieves up to a 200% improvement in zero-shot task success rate and transfers known skills to novel tasks faster than non-language-based baselines, demonstrating the utility of language for goal specification.

On the Bipartiteness Constant and Expansion of Cayley Graphs
Nina Moorman, Peter Ralli, Prasad Tetali
EJC 2020
Let G be a finite, undirected, d-regular graph and A(G) its normalized adjacency matrix, with eigenvalues 1 = λ_1(A) ≥ · · · ≥ λ_n ≥ −1. It is a classical fact that λ_n = −1 if and only if G is bipartite. Our main result provides a quantitative separation of λ_n from −1 in the case of Cayley graphs, in terms of their expansion. Denoting by h_out the (outer boundary) vertex expansion of G, we show that if G is a non-bipartite Cayley graph (constructed using a group and a symmetric generating set of size d), then λ_n ≥ −1 + c·h_out²/d², for c an absolute constant. We exhibit graphs for which this result is tight up to a factor depending on d. This improves upon a recent result by Biswas and Saha (2021), who showed λ_n ≥ −1 + h_out⁴/(2⁹d⁸). We also note that such a result could not be true for general non-bipartite graphs.

Conference Publications


Investigating the Impact of Experience on a User's Ability to Perform Hierarchical Abstraction
N. Moorman, N. Gopalan, A. Singh, E. Hedlund-Botti, M. Schrum, S. Yang, L. Seelam, M. Gombolay
RSS 2023; Best Student Paper Award Finalist
The field of Learning from Demonstration enables end-users, who are not robotics experts, to shape robot behavior. However, using human demonstrations to teach robots to solve long-horizon or multi-modal problems by leveraging the hierarchical structure of the task is still an unsolved problem. Prior work has yet to show that human users can provide sufficient demonstrations in novel domains without showing the demonstrators explicit teaching strategies for each domain. In this work, we investigate whether non-expert demonstrators can generalize robot teaching strategies to provide necessary and sufficient demonstrations to robots zero-shot in novel domains. We find that increasing participant experience with providing demonstrations improves their demonstrations' degree of sub-task abstraction (p<.001), teaching efficiency (p<.001), and sub-task redundancy (p=.046) in novel domains, allowing generalization in robot teaching. Our findings demonstrate for the first time that non-expert demonstrators can transfer experience from a series of training experiences to provide high-quality demonstrations when programming robots to complete task and motion planning problems in novel domains without the need for explicit instruction.

Impacts of Robot Learning on User Attitude and Behavior
N. Moorman, E. Hedlund-Botti, M. Schrum, M. Natarajan, M. Gombolay
HRI 2023
We investigate the impacts on end-users of in situ robot learning through a series of human-subjects experiments. We examine how different learning methods influence both in-person and remote participants’ perceptions of the robot. While we find that the degree of user involvement in the robot’s learning method impacts perceived anthropomorphism (p = .001), we find that it is the participants’ perceived success of the robot that impacts the participants’ trust in (p < .001) and perceived usability of the robot (p < .001) rather than the robot’s learning method. Therefore, when presenting robot learning, the performance of the learning method appears more important than the degree of user involvement in the learning. Furthermore, we find that the physical presence of the robot impacts perceived safety (p < .001), trust (p < .001), and usability (p < .014). Thus, for tabletop manipulation tasks, researchers should consider the impact of physical presence on experiment participants.

Negative Result for Learning from Demonstration: Challenges for End-Users Teaching Robots with Task and Motion Planning Abstractions
Nakul Gopalan, Nina Moorman, Manisha Natarajan, Matthew Gombolay
RSS 2022
Prior works have not examined whether non-roboticist end-users are capable of providing such hierarchical demonstrations without explicit training from a roboticist showing how to teach each task. To address the limitations and assumptions of prior work, we conduct two novel human-subjects experiments to answer (1) what are the necessary conditions to teach users through hierarchy and task abstractions and (2) what instructional information or feedback is required to support users to learn to program robots effectively to solve novel tasks. Our first experiment shows that fewer than half (35.71%) of our subjects provide demonstrations with sub-task abstractions when not primed. Our second experiment demonstrates that users fail to teach the robot correctly when not shown a video demonstration of an expert’s teaching strategy for the exact task that the subject is training. Not even showing the video of an analogous task was sufficient. These experiments reveal the need for fundamentally different approaches in LfD which can allow end-users to teach generalizable long-horizon tasks to robots without the need to be coached by experts at every step.

MIND MELD: Personalized Meta-Learning for Robot-Centric Imitation Learning
Mariah L. Schrum, Erin Hedlund-Botti, Nina Moorman, Matthew C. Gombolay
HRI 2022; Best Technical Paper Award
To create a more human-aware version of robot-centric LfD, we present Mutual Information-driven Meta-learning from Demonstration (MIND MELD). MIND MELD meta-learns a mapping from suboptimal and heterogeneous human feedback to optimal labels, thereby improving the learning signal for robot-centric LfD. The key to our approach is learning an informative personalized embedding using mutual information maximization via variational inference. The embedding then informs a mapping from human provided labels to optimal labels. We evaluate our framework in a human-subjects experiment, demonstrating that our approach improves corrective labels provided by human demonstrators. Our framework outperforms baselines in terms of ability to reach the goal (p < .001), average distance from the goal (p = .006), and various subjective ratings (p = .008).

Effects of Social Factors and Team Dynamics on Adoption of Collaborative Robot Autonomy
Mariah L. Schrum, Glen Neville, Michael Johnson, Nina Moorman, Rohan Paleja, Karen M. Feigh, Matthew C. Gombolay
HRI 2021
In an analog manufacturing environment, we explore how these various factors influence an individual's willingness to work with a robot over a human co-worker in a collaborative Lego building task. We specifically explore how this willingness is affected by: 1) the level of social rapport established between the individual and his or her human co-worker, 2) the anthropomorphic qualities of the robot, and 3) factors including trust, fluency, and personality traits. Our results show that a participant's willingness to work with automation decreased due to lower perceived team fluency (p=0.045), the rapport established between a participant and their co-worker (p=0.003), the gender of the participant being male (p=0.041), and a higher inherent trust in people (p=0.018).

Poster Sessions, Presentations, and Invited Talks

Awards

Reviewing Experience

Conference on Robot Learning (CoRL) | 2023

IEEE International Conference on Robotics and Automation (ICRA) | 2024

Robotics: Science and Systems (RSS) | 2022, 2024

AAAI Conference on Artificial Intelligence (AAAI) | 2022