Hossein Souri

I am a PhD candidate in Computer Science at Johns Hopkins University, where I have the privilege of being supervised by Bloomberg Distinguished Professor Rama Chellappa. I also collaborate closely with Tom Goldstein, Micah Goldblum, and Soheil Feizi at the University of Maryland, as well as Andrew Gordon Wilson and Yann LeCun at NYU and Meta. I have published multiple papers in top-tier conferences and journals, including NeurIPS, ICLR, ICML, and TPAMI.

I'm a Graduate Research Assistant at the Artificial Intelligence for Engineering and Medicine (AIEM) Lab. Before joining JHU, I obtained my MS in Electrical and Computer Engineering from the University of Maryland, College Park. During my Master's studies, I was a research assistant at the University of Maryland Institute for Advanced Computer Studies (UMIACS).

Email  /  Google Scholar  /  GitHub  /  Twitter  /  LinkedIn  /  CV

profile photo
News
  • Jan-2024: One paper accepted to IEEE ICASSP 2024.
  • Sep-2023: One paper accepted to NeurIPS 2023.
  • Jun-2023: One paper accepted to TPAMI.
  • May-2023: One paper accepted to the AAAI/ACM Conference on AI, Ethics, and Society 2023.
  • Sep-2022: Two papers accepted to NeurIPS 2022.
  • Jul-2022: Three papers accepted to ICML 2022 workshops.
  • Jun-2022: One paper accepted to IEEE TIFS.
  • Jan-2022: One paper accepted to ICLR 2022.
  • Nov-2020: One oral paper accepted to IEEE FG 2020, receiving a Best Paper (Honorable Mention) award.
Research

My primary research area is applied machine learning and computer vision, with a focus on improving the robustness, transferability, and performance of image, face, and video classifiers, object detection and segmentation models, and generative models. Topics include adversarial robustness, transfer learning, self-supervised learning, generative models, and data poisoning and backdoor attacks.

Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion
Hossein Souri, Arpit Bansal, Hamid Kazemi, Liam Fowl, Aniruddha Saha, Jonas Geiping, Andrew Gordon Wilson, Rama Chellappa, Tom Goldstein, Micah Goldblum
2024
PDF / arXiv / code

In this work, we use guided diffusion to synthesize base samples from scratch that lead to significantly more potent poisons and backdoors than previous state-of-the-art attacks. Our Guided Diffusion Poisoning (GDP) base samples can be combined with any downstream poisoning or backdoor attack to boost its effectiveness.

Identifying Attack-Specific Signatures in Adversarial Examples
Hossein Souri, Pirazh Khorramshahi, Chun Pong Lau, Micah Goldblum, Rama Chellappa
IEEE ICASSP, 2024
PDF / arXiv

The adversarial attack literature contains a myriad of algorithms for crafting perturbations which yield pathological behavior in neural networks. In many cases, multiple algorithms target the same tasks and even enforce the same constraints. In this work, we show that different attack algorithms produce adversarial examples which are distinct not only in their effectiveness but also in how they qualitatively affect their victims.

Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks
Micah Goldblum*, Hossein Souri*, Renkun Ni, Manli Shu, Viraj Prabhu, Gowthami Somepalli, Prithvijit Chattopadhyay, Mark Ibrahim, Adrien Bardes, Judy Hoffman, Rama Chellappa, Andrew Gordon Wilson, Tom Goldstein
NeurIPS, 2023
PDF / arXiv / code

Battle of the Backbones (BoB) is a large-scale comparison of pretrained vision backbones, including self-supervised (SSL) models, vision-language models, CNNs, and ViTs, across diverse downstream tasks: classification, object detection, segmentation, out-of-distribution (OOD) generalization, and image retrieval.

Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses
Chun Pong Lau, Jiang Liu, Hossein Souri, Wei-An Lin, Soheil Feizi, Rama Chellappa
TPAMI, 2023
PDF / arXiv

In this paper, we propose a novel threat model, the Joint Space Threat Model (JSTM), which can serve as a special case of the neural perceptual threat model that does not require additional relaxation to craft the corresponding adversarial attacks. We also propose Interpolated Joint Space Adversarial Training (IJSAT), which applies a Robust Mixup strategy and trains the model with Joint Space Adversarial (JSA) samples.

A Deep Dive into Dataset Imbalance and Bias in Face Identification
Valeriia Cherepanova, Steven Reich, Samuel Dooley, Hossein Souri, Micah Goldblum, Tom Goldstein
AIES, 2023
PDF / arXiv

In this paper, we explore the effects of each kind of imbalance possible in face identification, and discuss other factors which may impact bias in this setting.

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri, Liam Fowl, Rama Chellappa, Micah Goldblum, Tom Goldstein
NeurIPS, 2022
PDF / arXiv / code

Typical backdoor attacks insert the trigger directly into the training data, although the presence of such an attack may be visible upon inspection. We develop a new hidden trigger attack, Sleeper Agent, which employs gradient matching, data selection, and target model re-training during the crafting process. Sleeper Agent is the first hidden trigger backdoor attack to be effective against neural networks trained from scratch. We demonstrate its effectiveness on ImageNet and in black-box settings.

Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors
Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann LeCun, Andrew Gordon Wilson
NeurIPS, 2022
PDF / arXiv / code

Our Bayesian transfer learning framework transfers knowledge from pre-training to downstream tasks. We fit a probability distribution over the parameters of a feature extractor to the pre-training loss, then rescale it and use it as a prior, up-weighting parameter settings consistent with pre-training.

The Close Relationship Between Contrastive Learning and Meta-Learning
Renkun Ni, Manli Shu, Hossein Souri, Micah Goldblum, Tom Goldstein
ICLR, 2022
PDF / arXiv / code

In this paper, we discuss the close relationship between contrastive learning and meta-learning under a certain task distribution. We complement this observation by showing that established meta-learning methods, such as Prototypical Networks, achieve comparable performance to SimCLR when paired with this task distribution.

Mutual Adversarial Training: Learning together is better than going alone
Jiang Liu, Chun Pong Lau, Hossein Souri, Soheil Feizi, Rama Chellappa
IEEE TIFS, 2022
IEEE / PDF / arXiv

In this paper, we propose mutual adversarial training (MAT), in which multiple models are trained together and share the knowledge of adversarial examples to achieve improved robustness. MAT allows robust models to explore a larger space of adversarial samples, and find more robust feature spaces and decision boundaries.

GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue
Pirazh Khorramshahi*, Hossein Souri*, Rama Chellappa, Soheil Feizi
arXiv, 2020
PDF / arXiv

GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution. To tackle this issue, we take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity. We call this approach GANs with Variational Entropy Regularizers (GAN+VER).

An adversarial learning algorithm for mitigating gender bias in face recognition
Prithviraj Dhar, Joshua Gleason, Hossein Souri, Carlos D. Castillo, Rama Chellappa
arXiv, 2020
PDF / arXiv

We propose a novel approach, Adversarial Gender De-biasing (AGD), to help mitigate gender bias in face recognition by reducing the strength of gender information in face recognition features.


Updated version: Towards Gender-Neutral Face Descriptors for Mitigating Bias in Face Recognition

ATFaceGAN: Single Face Image Restoration and Recognition from Atmospheric Turbulence
Chun Pong Lau, Hossein Souri, Rama Chellappa
FG, 2019   (Oral Presentation)
IEEE / PDF / arXiv

In this work, we propose a generative single-frame restoration algorithm that disentangles the blur and deformation due to atmospheric turbulence and reconstructs a restored image.

Academic Service
  • Conference Reviewer: CVPR, NeurIPS, ICLR, ICML, ECCV, ICCV, WACV
  • Journal Reviewer: Pattern Recognition Journal
Research Experience
  • Research Assistant, Johns Hopkins University, Aug 2020 - Present
  • Research Assistant, University of Maryland, 2018 - 2020
  • Research Assistant, University of Tehran, 2016 - 2018
Teaching Experience
  • Teaching Assistant, Machine Intelligence, Johns Hopkins University, Spring 2021, Spring 2022
  • Teaching Assistant, Machine Perception, Johns Hopkins University, Fall 2021
  • Teaching Assistant, University of Maryland, College Park, Fall 2018 - Spring 2019

This theme has been stolen from Jon Barron's website :-|

Last update: April, 2024