About Me

I'm a graduate student at UC San Diego specializing in Artificial Intelligence, with hands-on experience developing RAG systems at NASA and building high-impact ML models in the healthcare industry. My work spans computer vision, including real-time robotic tracking, and NLP with Transformers and LLMs. I'm passionate about applying state-of-the-art research to solve complex, real-world challenges and shape a future where technology improves our health, relationships, and understanding of the world.

Career Path

AI Researcher Intern, NASA (June 2025 - Sept 2025)

At NASA's Ames Research Center, I architected a specialized chatbot using a LightRAG framework to query complex aerospace data. My work involved engineering a custom time-based knowledge graph to improve the RAG model's performance and precision.

Graduate Student Researcher, UCSD (March 2025 - June 2025)

In the UCSD Cognitive Robotics Laboratory, I developed and deployed a real-time, multi-modal person-tracking system for a Boston Dynamics robot, enabling it to autonomously follow users with high accuracy by fusing RGB-D and skeletal data.

Master of Science in Computer Science (Sept 2024 - Present)

I’m currently pursuing a Master’s in CS with a focus on Artificial Intelligence at UC San Diego, where I delve into exciting AI theory and applications.

Data Scientist (May 2022 - June 2024)

At UnitedHealth Group, our team developed AI models that predicted adverse health outcomes before they occurred, enabling early intervention. I led data analysis, data collection, model development, and model deployment across multiple projects, and built pipelines that parallelized GIS data collection, feeding critical geographic insights into our machine learning models and predictive analytics.

Bachelor of Science in Computer Science (Sept 2018 - May 2022)

I graduated summa cum laude from Gustavus Adolphus College in 2022 with a 3.96 GPA, where I laid the groundwork for my career in tech.

Projects

BERTopic Podcast Topic Modeling

Analyzing Changing Trends of Podcasts Over Time Using BERTopic

Applied BERTopic to extract and analyze key topics from over 3,000 YouTube podcast transcripts, enriching topic labels with GPT-4o and achieving a 78.56% classification rate for news podcasts.

Semantic Segmentation of Images

Semantic Segmentation of Images Using Deep Learning

Developed multiple CNN architectures for semantic segmentation on the PASCAL VOC-2012 dataset, achieving 83.3% pixel accuracy with a pretrained ResNet-based model, significantly improving on baseline performance.

Graph and Time Series Query

Natural Language Queries for Graph and Time Series Databases

Developed a natural language query interface to interact with a digital twin, using English to query both Neo4j graph databases for structural data and InfluxDB time-series databases for sensor data.

Deep Q-Learning Maze Traversal

Deep Q-Learning Variations for Maze Traversal

Implemented and compared multiple Deep Q-Learning architectures for maze navigation, including CNNs, Masked Q-Learning, and Dueling DQN, with a hyperparameter-tuned network achieving a 100% win rate and minimal loss on complex mazes.
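At the core of every DQN variant is the Bellman Q-update, which the deep architectures approximate with a network. As a minimal illustration, here is the tabular form of that update on a hypothetical 1D corridor maze (this toy environment and all names are illustrative, not the project's actual mazes or models):

```python
import random

ACTIONS = [-1, +1]  # move left / move right in a 5-cell corridor, goal at cell 4

def step(state, action):
    """Toy environment: reward 1.0 only on reaching the goal cell."""
    next_state = max(0, min(4, state + action))
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward, next_state == 4

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {s: {a: 0.0 for a in ACTIONS} for s in range(5)}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy exploration
            a = rng.choice(ACTIONS) if rng.random() < eps else max(Q[s], key=Q[s].get)
            s2, r, done = step(s, a)
            # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2].values()) - Q[s][a])
            s = s2
    return Q
```

A Dueling DQN keeps this same update target but decomposes the network's output into state-value and advantage streams; the masked variant restricts the max to legal actions.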

Image Classification with Neural Network

Image Classification with Low-Level Neural Network

Implemented a neural network from scratch using NumPy to classify clothing images, experimenting with different activation functions and regularization techniques to achieve 88% test accuracy.
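The forward pass of such a from-scratch network reduces to a few matrix operations. This sketch shows one hidden layer with ReLU and a softmax output; the dimensions and weight initialization are illustrative assumptions, not the project's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(X, W1, b1, W2, b2):
    h = relu(X @ W1 + b1)          # hidden activations
    return softmax(h @ W2 + b2)    # per-class probabilities

# toy dimensions: 784 input pixels, 128 hidden units, 10 clothing classes
W1 = rng.normal(0, 0.01, (784, 128)); b1 = np.zeros(128)
W2 = rng.normal(0, 0.01, (128, 10));  b2 = np.zeros(10)

X = rng.random((5, 784))           # batch of 5 flattened images
probs = forward(X, W1, b1, W2, b2)
```

Training then backpropagates the cross-entropy gradient through these same operations; swapping `relu` for sigmoid or tanh is how the activation-function experiments were run.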

Machine Learning Engineer Nanodegree

Predictive Health Assessment Model

Used Azure Machine Learning Studio to train and hyperparameter-tune a model to predict individuals' general health from survey data, deployed as a scalable API for quick and secure health assessments.

Expectation Maximization on Movie Reviews

Expectation Maximization on Movie Reviews

Classified individuals into 4 types of movie watchers to predict their future reviews, using 256 iterations of Expectation Maximization on a movie-review dataset.
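Each EM iteration alternates an E-step (soft-assigning each reviewer to a type) with an M-step (re-estimating the type parameters). As a hedged sketch, here is one iteration for a mixture of Bernoulli components over binary liked/disliked vectors; the component count, parameterization, and names are illustrative, not the project's actual model:

```python
import numpy as np

def em_step(X, pi, theta, eps=1e-9):
    """One EM iteration. X: (n, m) binary reviews, pi: (K,) mixing weights,
    theta: (K, m) per-type like-probabilities."""
    # E-step: responsibilities r[i, k] proportional to pi[k] * P(x_i | theta_k),
    # computed in log space to avoid underflow
    log_lik = X @ np.log(theta + eps).T + (1 - X) @ np.log(1 - theta + eps).T
    log_r = np.log(pi + eps) + log_lik
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights and per-type like-probabilities
    pi = r.mean(axis=0)
    theta = (r.T @ X) / (r.sum(axis=0)[:, None] + eps)
    return pi, theta, r
```

Iterating this until the responsibilities stabilize yields the watcher types; predicted future reviews come from the responsibility-weighted `theta`.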

N-Gram Log Likelihood

N-Gram Log Likelihood

Built a program to compute unigram and bigram log likelihoods of character sequences, tokenizing by word and combining models to compute sentence log likelihood.
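The combined scoring works as described: the first word is scored by the unigram model and each subsequent word by the bigram model conditioned on its predecessor. This sketch assumes whitespace tokenization and add-one smoothing, which are illustrative choices rather than the program's exact ones:

```python
import math
from collections import Counter

def train_ngrams(corpus):
    tokens = corpus.split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams, len(tokens)

def unigram_logprob(token, unigrams, total, vocab_size):
    # add-one (Laplace) smoothing so unseen tokens get finite log-probability
    return math.log((unigrams[token] + 1) / (total + vocab_size))

def bigram_logprob(prev, token, unigrams, bigrams, vocab_size):
    return math.log((bigrams[(prev, token)] + 1) / (unigrams[prev] + vocab_size))

def sentence_loglik(sentence, unigrams, bigrams, total):
    V = len(unigrams) + 1  # +1 reserves probability mass for unseen tokens
    tokens = sentence.split()
    ll = unigram_logprob(tokens[0], unigrams, total, V)   # first word: unigram model
    for prev, tok in zip(tokens, tokens[1:]):             # rest: bigram model
        ll += bigram_logprob(prev, tok, unigrams, bigrams, V)
    return ll
```

Sentences whose word pairs appear in the training text score a higher (less negative) log likelihood than sentences built from unseen bigrams.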

Transformer Architecture

Transformer Encoder and Decoder Optimization

Implemented and analyzed attention mechanisms in a transformer, optimizing encoder and decoder patterns and evaluating different positional embedding strategies, improving classification accuracy from 33.6% to 85.86%.
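The attention mechanism at the center of those encoder/decoder patterns is scaled dot-product attention. This is a minimal NumPy sketch with a causal (decoder-style) mask; the sequence length and dimensions are illustrative, not the project's actual configuration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # (seq_q, seq_k) similarity scores
    if mask is not None:
        scores = np.where(mask, scores, -1e9)       # masked positions get ~zero weight
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

seq, d = 4, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(seq, d)) for _ in range(3))

# causal mask: position i may attend only to positions <= i (decoder self-attention)
causal = np.tril(np.ones((seq, seq), dtype=bool))
out, w = scaled_dot_product_attention(Q, K, V, mask=causal)
```

The encoder uses the same computation without the mask, and the positional-embedding strategies under comparison change only what is added to the inputs before `Q`, `K`, and `V` are projected.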

RNN and LSTM Shakespearean Text Generation

RNN and LSTM Shakespearean Text Generation

Implemented and compared Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) models for character-level text generation using Shakespeare's works.

Contact