About Me

I am currently a Master's student studying Computer Science at UMass Amherst. I'm particularly interested in Machine Learning and Natural Language Processing. I spent the summer of 2016 at Optum, where I applied machine learning to the healthcare domain, and the last two summers doing research in Professor Andrew McCallum's lab, IESL, on expertise modeling and paper-review assignment for CS conferences. For the summer of 2019, I'll be interning at Signify (Philips Lighting), where I'll be applying Deep Learning and Computer Vision to the lighting domain. In the Fall of 2019, I'll be joining Amazon Alexa as a Research Scientist, working on NLU applications.

Experience

  • Applied Research Scientist at Amazon Alexa [Sep 2019 - Dec 2019]
  • Deep Learning Intern at Signify (Philips Lighting) [Jun 2019 - Present]
  • Web Developer for UMass IT [Jan 2019 - Present]
  • Undergraduate Research Assistant at IESL [Sep 2016 - Jan 2018]
  • Software Engineer Intern at Optum [Jun - Aug 2016]

Projects

German-English Neural Machine Translation

Abstract: Our goal was to build a Neural Machine Translation model to translate from German to English. Namely, we aim to produce coherent English translations of German source sentences, using a parallel corpus of German-English sentence pairs as our primary data source. This task matters in both academia and industry: the motivation behind our project is to help overcome the language barrier and ultimately improve communication for people worldwide. We experiment with an Attentional GRU and a base Transformer model in order to build NMT systems, and also experiment with a data boosting technique. All of our experiments and results are described in our paper.
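Both the Attentional GRU and the Transformer rely on an attention mechanism that re-weights value vectors by query-key similarity. Here is a toy pure-Python sketch of scaled dot-product attention for intuition; the function and variable names are illustrative and not taken from our implementation:

```python
import math

def scaled_dot_product_attention(q, k, v):
    # q: a single query vector; k: list of key vectors; v: list of value vectors.
    # Scores are dot products scaled by sqrt(dimension), softmaxed into weights.
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, key)) / math.sqrt(d) for key in k]
    m = max(scores)                                  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Output is the weight-averaged value vector.
    dim_v = len(v[0])
    return [sum(w * vec[j] for w, vec in zip(weights, v)) for j in range(dim_v)]
```

With identical keys, the weights become uniform and the output is simply the mean of the value vectors.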

Super Resolution on Low Quality Facial Images

Abstract: Reconstruction of obfuscated images is the problem of transforming a low-definition image into a higher-definition one. More specifically, facial reconstruction from obfuscated images means transforming a blurry picture of a person's face into an unambiguous depiction of that person's face. We use different Convolutional Neural Network architectures in order to tackle this problem. We apply our models to the Labeled Faces in the Wild dataset, obfuscating the faces with a Gaussian blur, and evaluate them using the PSNR metric. We also explore different objective functions: pixel loss, perceptual loss, and a weighted combination of the two. Our experiments demonstrate that models using pixel loss achieve the highest PSNR values, while models using perceptual loss generate the most aesthetically pleasing reconstructions. Our qualitative results show that our models produce recognizable faces from blurred inputs. All our models, experiments, and results are described in our paper.
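For reference, PSNR scores a reconstruction against the original image via mean squared error (higher is better). A minimal pure-Python sketch over flattened pixel values; names are illustrative, not from our codebase:

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    # Mean squared error over flat, equal-length pixel sequences.
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    # PSNR in decibels: 10 * log10(MAX^2 / MSE)
    return 10.0 * math.log10(max_val ** 2 / mse)
```

In practice this would be computed per image over all channels with a library implementation, but the formula is the same.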

Modeling Affect Intensity in Tweets

Abstract: Sentiment or affect detection is a long-studied problem. A novel variant is affect intensity detection, in which the affect of a document is already known and we want to predict the real-valued intensity of that affect as felt by the author. Our domain of interest is tweets. We tackle this problem by constructing multiple feature representations of the tweet data, ranging from sparse features such as bag-of-words to distributional representations such as GloVe. In addition, we build several regression models, ranging from support vector regression to deep neural networks. Our best result comes from a deep neural network trained on a combination of sentiment lexicon features and GloVe embeddings, achieving a Pearson correlation coefficient of 0.681. This outperforms the Pearson correlation coefficient reported in the original task paper. Our experiments and results are described in our paper.
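For context, the Pearson correlation coefficient used as the evaluation metric measures linear agreement between predicted and gold intensities, ranging from -1 to 1. A minimal pure-Python sketch (names are illustrative):

```python
import math

def pearson(x, y):
    # Pearson correlation between two equal-length numeric sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Perfectly linearly related predictions score 1.0; perfectly inverted predictions score -1.0.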

Predictive Analysis on Real-Time Machine Performance Data

Our app is a platform for predictive analysis on real-time machine performance data from Black & Decker. Its primary purpose is to anticipate whether a machine will fail based on the given data. A secondary objective was to build an IoT dashboard to visualize the results of exploratory data analysis (e.g. summary statistics).

Uptodayte

Uptodayte is a news aggregation and visualization tool that collects relevant news articles about stories from the last 24 hours. Our web application continuously queries the News API and runs our machine learning algorithm to identify the top headlines of the day. Hovering over a location marker reveals a tooltip containing the headline of the corresponding news article, and clicking on a marker opens the article about the story or event that took place at that location. Uptodayte was designed with usability and user intuition in mind: we believe that visualizing data through its properties leads to a far better understanding of the overall picture than a crude listing, and this is exactly the vision behind Uptodayte. For these reasons, we believe Uptodayte is usable right out of the box, without instructions.

Presentations/Notes

I gave a talk at Signify (Philips Lighting) on Text-Augmented Image Classification. Check out the slides here. Also check out the final presentation I gave about my work during the summer at Signify.

I like to keep a set of notes of things that I learn here. Check them out!

Other Fun Stuff

In my spare time, I enjoy playing basketball and football, outdoor activities such as hiking and biking, traveling the world, and learning new things on Coursera.