Resume
Mission Statement
To make abstract information more tangible for everyday people and improve lives. I have a special interest in:
Machine Learning
Robotics
Data Visualization
What I do at Alphabet (2017 - Present)
As of January 2023 I am a Machine Learning engineer on the Perception team at Waymo. I improve the ability of our models to understand the semantics of construction zones. Previously I worked on the Behavior Prediction team, where we predict the actions of cars, cyclists, pedestrians, and other road users in order to safely share the road. Some highlights from my time at Waymo so far:
Foundational model improvements via changes to the loss functions and model architecture.
Improved statistical methods for evaluating model performance, enabling higher-quality iteration for the team.
Designed and performed ablation tests to reduce the model's set of inputs and overall complexity.
Previously I was a Machine Learning engineer on the Dermatology team within Google Health. My responsibilities included:
Model lead for the premier DermAssist product: identifying the most promising research and shepherding it into our commercial product.
Developing new model infrastructure to enable continual model updates and deployment
Developing new performance metrics for models that produce differential diagnoses
Onboarding new team members
Some of the accomplishments I was most proud of were:
Creating the majority of models included in the product's classification ensemble
Leading a project to distill the model ensemble and reduce its overall resource consumption.
Before that I worked within Google Brain, and prior to that I worked on a ranking team within Google Cloud. Some highlights:
DermAssist: a full-stack mobile app built on cutting-edge computer vision models (article in The Verge)
Frontend TL for the CE Mark-approved product
Trained the majority of models in the DermAssist model ensemble
TF Blog post: How DermAssist uses TensorFlow.js for on-device image quality checks
Post Market Monitoring solution for tracking actual performance of our model in the wild
Full-featured labeling data analysis library, including estimates of labeler performance and ground truth.
Full IDE experience for developers to make labeling tasks
Clinical decision support system for healthcare professionals
Paper: Quality Control Challenges in Crowdsourcing Medical Labeling
Identifying uniqueness of worker behavior with Deep Learning (specifically RNNs)
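The labeling analysis library above isn't public here, so as a minimal sketch of the kind of estimate it produces, the snippet below aggregates crowd labels by simple majority vote and scores each labeler against that estimated ground truth. The function names and the majority-vote aggregation are illustrative assumptions, not the library's actual API:

```python
from collections import Counter

def majority_vote(labels_per_item):
    """Estimate ground truth as the most common label for each item."""
    return {item: Counter(votes).most_common(1)[0][0]
            for item, votes in labels_per_item.items()}

def labeler_accuracy(votes_by_labeler, truth):
    """Fraction of each labeler's votes that agree with the estimated truth."""
    acc = {}
    for labeler, votes in votes_by_labeler.items():
        matches = sum(1 for item, label in votes.items()
                      if truth.get(item) == label)
        acc[labeler] = matches / len(votes)
    return acc
```

Real crowdsourcing pipelines often replace the majority vote with probabilistic estimators that model per-labeler reliability jointly with the true labels, but the interface is the same shape.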
I have also worked with teams that utilize applied ML at Google. My best projects:
Application of convolutional neural networks to determining the placement of endotracheal and nasogastric tubes.
Application of the T5 model to helping job applicants formulate interview responses.
Core Skills
Computer Languages: Proficient in Python, C++, Java, TypeScript, JavaScript, and SQL.
Machine Learning: Proficient with PyTorch, Keras, and TensorFlow.
Ongoing Education and Interests
Ongoing Education:
In Autumn of 2021, I started as a part-time Master's student at Stanford University. I've taken the following courses so far:
CS330 - Meta Learning
Created a bird song classifier to apply to a novel dataset provided by Andreas Paepcke.
CS329T - Trustworthy Machine Learning
Created a successful method for making a black box attack on text summarization models with less data.
AA274B - Principles of Robot Autonomy II
AA274A - Principles of Robot Autonomy I
Built a robot that simultaneously localized itself within and mapped an environment, detected the objects in that environment, and executed a mission to retrieve those objects.
Previously, I completed the Artificial Intelligence Graduate Certificate through the Stanford Center for Professional Development on December 19. I finished with a 4.0 GPA and a solid understanding of the applied math behind ML. In particular, I focused on:
CS236 - Deep Generative Models - Stefano Ermon; Aditya Grover
CS224N - Natural Language Processing with Deep Learning - Christopher Manning
Created fake news articles from article summaries using several different NLP models, with a particular focus on GRUs and DCNNs. Paper
CS221 - Artificial Intelligence: Principles and Techniques - Percy Liang
Used existing NLP techniques to assign "left/right" bias labels to news articles. Applied explainability research to identify what parts of articles indicated they were biased.
AA228 - Decision Making Under Uncertainty - Mykel Kochenderfer
Developed a track simulator for a self-driving model car and drove the car around the track via a particle filter. Paper
Other Projects:
Putting 3D drone imagery in a VR Headset with ML
Blog post: Using a single camera from a drone and machine learning, we create a 3D view of the drone's flight in virtual reality.
Automatic transcription of screenshots on Reddit
Working with a small model car built for DIY Robocar Races
Github with simulator and path finding implementations
An interesting simulated approach with particle filters (unfortunately this approach did not port well to commodity hardware).
The current approach is simpler: it involves a perspective transform, hue-based detection of the road lines, and plotting steering arcs based on wheelbase length and steering angle.
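The steering-arc plotting above can be sketched with the standard bicycle model, where the turning radius is the wheelbase divided by the tangent of the steering angle. The function names, units, and arc parameterization here are illustrative assumptions, not the project's actual code:

```python
import math

def turning_radius(wheelbase_m: float, steering_angle_rad: float) -> float:
    """Bicycle-model turning radius: R = L / tan(delta)."""
    return wheelbase_m / math.tan(steering_angle_rad)

def steering_arc(wheelbase_m, steering_angle_rad, n_points=20, arc_len_m=1.0):
    """Points along the arc the car would trace at a fixed steering angle.

    Returns (x, y) points in the car frame: x forward, y to the left.
    """
    if abs(steering_angle_rad) < 1e-6:
        # Centered wheels: the "arc" degenerates to a straight line.
        return [(arc_len_m * i / n_points, 0.0) for i in range(n_points + 1)]
    r = turning_radius(wheelbase_m, steering_angle_rad)
    pts = []
    for i in range(n_points + 1):
        theta = (arc_len_m / r) * i / n_points  # heading change along the arc
        pts.append((r * math.sin(theta), r * (1 - math.cos(theta))))
    return pts
```

Projecting a few of these arcs onto the perspective-transformed road image lets the car pick the steering angle whose arc best tracks the detected lane lines.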
Catbot
Created with:
AdaFruit controller
Raspberry Pi 3
Raspberry Pi Camera
Ultrasonic Sensor
2 OSEPP Electronics tank platform kits
Fun toy that uses YOLONet to spin around in circles looking for a cat. When it finds one, it follows the cat until the ultrasonic sensor detects it up close. Then it waits for the cat's next move before beginning pursuit again.
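The chase behavior above amounts to a three-state machine. The sketch below is a toy model of that loop, with `cat_visible` and `distance_cm` standing in for the YOLO detector and ultrasonic sensor readings; the state names, actions, and distance threshold are all illustrative assumptions:

```python
# States of the Catbot chase loop.
SEARCH, PURSUE, WAIT = "search", "pursue", "wait"
CLOSE_CM = 30  # assumed "caught up" distance for the ultrasonic sensor

def step(state, cat_visible, distance_cm):
    """One tick of the chase state machine; returns (next_state, action)."""
    if state == SEARCH:
        # Spin in place until the detector spots a cat.
        return (PURSUE, "drive_forward") if cat_visible else (SEARCH, "spin")
    if state == PURSUE:
        if distance_cm < CLOSE_CM:
            return (WAIT, "stop")        # close enough; wait for the cat to move
        if not cat_visible:
            return (SEARCH, "spin")      # lost the cat; resume searching
        return (PURSUE, "drive_forward")
    # WAIT: hold still until the cat moves back out of sonar range.
    if distance_cm >= CLOSE_CM:
        return (SEARCH, "spin")
    return (WAIT, "stop")
```

On the real robot each tick would read the camera and ultrasonic sensor and send the returned action to the tank-platform motors.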
Depth predictions from drone footage!
Now this is car racing.
Catbot, temporarily reprogrammed to chase water bottles for demo purposes :-)
Previous Experience
Education
2012-2016: University of Texas at Austin, Computer Science
Journalism was my original degree and passion before I transferred to Computer Science in my junior year. My education and experience in journalism inform some of my interest in connecting technology and information.
Previous Work Experience
2016: Undergraduate Research Assistant at Information School lab
Created MmmTurkey - a framework for collecting worker behavioral data while they complete tasks on Amazon Mechanical Turk. Paper here. The framework was accepted to HCOMP and utilized for future research publications by the same lab.
2016: Fitbit
On the Sleep and Wellness team, built comparative tests of research and production code for classifying hardware sensor data into sleep stages.
2015: Texas Tribune - Digital Media Intern
Developed several visualizations for Texas Tribune articles
Developed an internal tool for journalists to explore existing datastores without needing developers to write scripts
2015: Blackbaud
Helped debug and develop features on a platform for non-profit fundraising
2014-2015: Digital Lead for Daily Texan - Student newspaper