# Arjun Rao

I am a final-year undergraduate student at The Chinese University of Hong Kong majoring in Financial Technology and Data Analytics.

My research interests broadly encompass optimization for machine learning. I currently work on decentralized machine learning at CUHK's Network Science and Optimization Laboratory with Professor Hoi-To Wai.

I previously interned at NASA Jet Propulsion Laboratory's Machine Learning and Instrument Autonomy Group, where I worked on improving the robustness of machine learning models deployed on board large-scale airborne and spaceborne imaging spectrometers, advised by Andrew Thorpe and Steffen Mauceri.

My CV (08/21) | Google Scholar
## Experience

**NASA Jet Propulsion Laboratory, Machine Learning & Instrument Autonomy Group**
Summer Research Intern (Caltech SURF@JPL) | Jun 2021 – Aug 2021

**The Chinese University of Hong Kong, Network Science and Optimization Laboratory**
Winter + Fall Research Intern | Topic: Decentralized Optimization | Advisor: Prof. Hoi-To Wai | Nov 2020 – May 2021

In decentralized consensus optimization, data is partitioned privately among $$N$$ workers, and the goal is to minimize the sum of the workers' objective functions $$\sum_{i=1}^{N} f_i(\theta)$$ while ensuring that all $$N$$ workers agree on a common parameter $$\theta$$. The catch: the workers are distributed over a sparse graph topology and can only communicate with their immediate neighbours. Applications include sensor networks and privacy-preserving machine learning.

**The Chinese University of Hong Kong, Department of Computer Science and Engineering**
Summer Research Intern | Topic: Adversarial Robustness | Advisor: Prof. Bei Yu | May 2020 – Sep 2021

Adversarial examples, which can be visualized as imperceptible 'distribution' shifts in the data, are a natural consequence of the dimensionality gap between high-dimensional inputs and the (locally) linear models trained on them. They generalize across different architectures and can be used in a 'black-box' fashion to threaten real-world deep learning models. The most common strategy for defending against test-time attacks has been to train models on adversarial data, which provides some robustness against standard attacks.
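The linear-model intuition above can be made concrete with a toy sketch. The snippet below is an illustration I put together for this page (synthetic data, not project code): it applies a fast-gradient-sign-style perturbation to a fixed linear classifier and shows how, in high dimension, a tiny per-coordinate change accumulates into a large shift in the classifier's margin.

```python
import numpy as np

# Toy illustration of the fast gradient sign method (FGSM) on a linear
# classifier. All data is synthetic; names (w, x, margin) are made up here.
rng = np.random.default_rng(1)
d = 200                                   # high dimension amplifies the effect
w = rng.standard_normal(d) / np.sqrt(d)   # fixed linear model
x = rng.standard_normal(d)                # a clean input
y = 1.0 if w @ x > 0 else -1.0            # label the model already gets right

def margin(v):
    # Positive margin => correctly classified; larger => more confident.
    return y * (w @ v)

# FGSM step: nudge every coordinate by eps in the direction that most
# decreases the margin. The gradient of the margin w.r.t. x is y * w,
# so the worst-case l-infinity perturbation is -eps * sign(y * w).
eps = 0.25
x_adv = x - eps * np.sign(y * w)

# The margin drops by eps * ||w||_1, which grows like sqrt(d) even though
# no single coordinate moved by more than eps.
print(margin(x), margin(x_adv))
```

Each coordinate moves by at most `eps`, yet the margin drop scales with the l1 norm of the weights, which is exactly the dimensionality-gap effect described above.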
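The decentralized consensus setting described above can also be sketched in a few lines. Below is a minimal decentralized gradient descent (DGD) loop on a hypothetical least-squares problem: each of the $$N$$ workers holds private data $$(A_i, b_i)$$, takes a local gradient step, and gossip-averages only with its two ring neighbours via a doubly stochastic mixing matrix. All problem data here is synthetic and the setup is assumed for illustration.

```python
import numpy as np

# Sketch of Decentralized Gradient Descent (DGD) for consensus optimization.
# Each worker i privately holds f_i(theta) = 0.5 * ||A_i @ theta - b_i||^2.
rng = np.random.default_rng(0)
N, d = 5, 3
A = [rng.standard_normal((10, d)) for _ in range(N)]
b = [rng.standard_normal(10) for _ in range(N)]

# Doubly stochastic mixing matrix for a ring graph: each worker mixes
# only with its two immediate neighbours (the sparse-topology constraint).
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

theta = np.zeros((N, d))   # one local copy of the parameter per worker
step = 0.005
for _ in range(5000):
    grads = np.stack([A[i].T @ (A[i] @ theta[i] - b[i]) for i in range(N)])
    theta = W @ theta - step * grads   # gossip-average, then local step

# With a constant step size the local iterates reach approximate consensus
# near the minimizer of the summed objective.
print(np.max(np.abs(theta - theta.mean(axis=0))))
```

Note the trade-off encoded in `step`: a constant step size leaves an $$O(\text{step})$$ consensus error, which is why diminishing step sizes (or gradient-tracking variants) are used when exact consensus is required.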