M.Sc. in Communication Systems (Specialized in Data Analytics)
École polytechnique fédérale de Lausanne (EPFL), Switzerland 2019 - 2022
B.Eng.(Hons) in Electronics and Electrical Engineering (First Class)
The University of Edinburgh, United Kingdom 2017 - 2019
DSR: Towards Drone Image Super-Resolution
We propose a novel drone image dataset, with scenes captured at low and high resolutions, and across a span of altitudes. Our results show that off-the-shelf state-of-the-art networks suffer a significant drop in performance in this different domain. We additionally show that simple fine-tuning and incorporating altitude awareness into the network's architecture both improve the reconstruction performance.
Fidelity Estimation Improves Noisy-Image Classification With Pretrained Networks
IEEE SPL 2021
We propose a method that can be applied to a pretrained classifier. Our method exploits a fidelity map estimate that is fused into the internal representations of the feature extractor, thereby guiding the attention of the network and making it more robust to noisy data. We improve noisy-image classification results by significant margins, especially at high noise levels, and come close to fully retrained approaches.
Deep Gaussian Denoiser Epistemic Uncertainty and Decoupled Dual-Attention Fusion
IEEE ICIP 2021
We propose a model-agnostic approach for reducing epistemic uncertainty while using a single pretrained network. We achieve this by tapping into the epistemic uncertainty through augmented and frequency-manipulated images to obtain denoised images with varying errors. We propose an ensemble method with two decoupled attention paths, one over the pixel domain and one over our different manipulations, to learn the final fusion. Our results significantly improve over the state-of-the-art baselines.
Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation
ACM RecSys 2020 (Best Short Paper)
Session-based recommendation requires continual adaptation to account for new and obsolete items and users, i.e., “continual learning”. We propose a method called Adaptively Distilled Exemplar Replay (ADER) that periodically replays previous training samples to the current model with an adaptive distillation loss. We empirically demonstrate that ADER consistently outperforms other baselines, and it even outperforms the method using all historical data at every update cycle.