Xiaochuang Han  

(You can call me Han, which is easier to pronounce and remember)

firstname.lastname@gmail.com

[CV] [Google Scholar]



Bio

I'm a PhD student in Computer Science and Engineering at the University of Washington, advised by Yulia Tsvetkov. I'm generally interested in natural language processing and multimodal generation. I have worked on topics such as codec-based multimodal generation, diffusion language models, inference-time model collaboration, and training data attribution. Before UW, I was a Master of Language Technologies student at Carnegie Mellon University. Before CMU, I was an undergrad at Georgia Tech, advised by Jacob Eisenstein. I have been supported by the OpenAI Superalignment Fellowship (2024) and the Meta AI Mentorship Program (2023, 2022).


Selected Publications

Please see my Google Scholar or CV for a full list of publications. (* denotes equal contribution.)

JPEG-LM: LLMs as Image Generators with Canonical Codec Representations
Xiaochuang Han, Marjan Ghazvininejad, Pang Wei Koh, and Yulia Tsvetkov.
arXiv preprint

David helps Goliath: Inference-Time Collaboration Between Small Specialized and Large General Diffusion LMs
Xiaochuang Han, Sachin Kumar, Yulia Tsvetkov, and Marjan Ghazvininejad.
NAACL 2024

Tuning Language Models by Proxy
Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, and Noah A. Smith.
COLM 2024

Trusting Your Evidence: Hallucinate Less with Context-aware Decoding
Weijia Shi*, Xiaochuang Han*, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, and Scott Wen-tau Yih.
NAACL 2024

In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning
Xiaochuang Han.
arXiv preprint

SSD-LM: Semi-autoregressive Simplex-based Diffusion Language Model for Text Generation and Modular Control
Xiaochuang Han, Sachin Kumar, and Yulia Tsvetkov.
ACL 2023

Understanding In-Context Learning via Supportive Pretraining Data
Xiaochuang Han, Daniel Simig, Todor Mihaylov, Yulia Tsvetkov, Asli Celikyilmaz, and Tianlu Wang.
ACL 2023

Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too?
Weijia Shi*, Xiaochuang Han*, Hila Gonen, Ari Holtzman, Yulia Tsvetkov, and Luke Zettlemoyer.
Findings of EMNLP 2023

ORCA: Interpreting Prompted Language Models via Locating Supporting Data Evidence in the Ocean of Pretraining Data
Xiaochuang Han and Yulia Tsvetkov.
arXiv preprint

Influence Tuning: Demoting Spurious Correlations via Instance Attribution and Instance-Driven Updates
Xiaochuang Han and Yulia Tsvetkov.
Findings of EMNLP 2021

Fortifying Toxic Speech Detectors Against Veiled Toxicity
Xiaochuang Han and Yulia Tsvetkov.
EMNLP 2020

Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions
Xiaochuang Han, Byron C. Wallace, and Yulia Tsvetkov.
ACL 2020

Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence Labeling
Xiaochuang Han and Jacob Eisenstein.
EMNLP 2019

No Permanent Friends or Enemies: Tracking Dynamic Relationships between Nations from News
Xiaochuang Han, Eunsol Choi, and Chenhao Tan.
NAACL 2019

Mind Your POV: Convergence of Articles and Editors Towards Wikipedia's Neutrality Norm
Umashanthi Pavalanathan, Xiaochuang Han, and Jacob Eisenstein.
CSCW 2018