Jayjun Lee

I am an incoming MS student at the University of Michigan.

Previously, I received my bachelor's degree in Electronic and Information Engineering from Imperial College London, where I worked in the Manipulation and Touch Lab under the guidance of Prof. Ad Spiers.

Email  /  Github

profile photo
Research

I am broadly interested in Robot Learning, Self-Supervised Learning, and Multi-Modal Perception:

1) Robot Learning for general-purpose embodied agents that leverage multi-modal signals to reason about and interact with the real world, objects, and humans.

2) Self-Supervised Learning for learning abstract internal representations (e.g., inferring intentions) that transfer to downstream tasks.

3) Multi-Modal Perception for scene understanding and gathering information for robust and adaptable robotic decision-making.

Publications
Naturalistic Robot Arm Trajectory Generation via Representation Learning
Jayjun Lee and Adam J. Spiers
Accepted as a poster presentation @ TAROS 2023.
paper video poster

We proposed an autoregressive spatio-temporal graph neural network that learns internal motion dynamics from action-free, graph-structured spatial trajectories of human drinking demonstrations, reconstructed with a self-devised 7-DoF wearable inertial motion-capture system. The learned model acts as a neural path planner that generates human-like drinking motions for assistive robotic manipulators.

Other Projects

Other projects that are aligned with my research interests.

Simulation-based Robot Learning using RLBench with UR5e

Modified RLBench to support the UR5e robot arm for simulation-based robot learning on a robot drinking task, implementing behavioural cloning and end-to-end visuomotor learning.
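At its core, behavioural cloning is supervised regression from observed states to demonstrated actions. A minimal sketch with a linear policy and synthetic data is below; the actual project used RLBench observations and a visuomotor network, and `W_true` here is an invented "expert" for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expert" demonstrations: action = W_true @ state + noise.
# (W_true is a made-up expert policy, not from the real project.)
W_true = np.array([[0.5, -0.2],
                   [0.1,  0.8]])
states = rng.normal(size=(200, 2))
actions = states @ W_true.T + 0.01 * rng.normal(size=(200, 2))

# Behavioural cloning = fit a policy to (state, action) pairs;
# for a linear policy this is just least squares.
W_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)
policy = lambda s: s @ W_hat

# The cloned policy should track the expert on held-out states.
test_states = rng.normal(size=(50, 2))
err = np.abs(policy(test_states) - test_states @ W_true.T).max()
print(err)
```

In the real pipeline the linear map is replaced by a convolutional network over camera images, but the training objective (regressing demonstrated actions) is the same.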

Open Manipulator X Robot Manipulation

Modelled the Open Manipulator X robot arm using Denavit-Hartenberg (DH) notation and derived its inverse kinematics to simulate robot control, then applied the controller to complete real-world manipulation tasks.
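DH modelling reduces each joint to four parameters and one homogeneous transform; chaining the transforms gives the forward kinematics that the inverse-kinematics derivation inverts. A small sketch using standard DH convention is below; the link parameters are an illustrative 2-link planar arm, not the actual Open Manipulator X values.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint from standard DH parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain per-joint DH transforms base-to-tip; returns end-effector pose."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Illustrative 2-link planar arm (link lengths 0.3 m and 0.2 m are
# made up, not the Open Manipulator X's actual parameters):
params = [(np.pi / 2, 0.0, 0.3, 0.0),
          (0.0,       0.0, 0.2, 0.0)]
pose = forward_kinematics(params)
print(pose[:3, 3])  # end-effector position, here (0, 0.5, 0)
```

Inverse kinematics then solves for the joint angles `theta` that make this chained transform reach a target pose.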

Resource-constrained Obstacle Detector for Rover

Designed a generalizable, learning-based object detector on resource-constrained hardware as the vision module for a Mars rover, replacing a rule-based traditional computer vision pipeline. (Top: hardcoded computer vision; bottom: learning-based perception.)

Autonomous Mars Rover

Built an autonomous rover that navigates unknown remote terrain populated with ball obstacles, detecting and mapping the obstacles as it goes.


Design and source code from Jon Barron's website